Test Report: Docker_Linux_containerd_arm64 18485

bdd124d1e5a6e86e5bd4f9e512befe1eefe531bd:2024-03-28:33775
Failed tests (8/335)

TestAddons/parallel/Ingress (37.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-340351 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-340351 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-340351 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [50c9f9b7-c90e-4d78-ab7a-6d8fb1e56afc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [50c9f9b7-c90e-4d78-ab7a-6d8fb1e56afc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004296652s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-340351 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.098858914s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-340351 addons disable ingress-dns --alsologtostderr -v=1: (1.024436565s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-340351 addons disable ingress --alsologtostderr -v=1: (7.815468995s)
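A minimal manual triage sequence for this failure (a sketch, not part of the harness: it assumes the addons-340351 profile from this run is still up, that the node IP is 192.168.49.2 as reported by "minikube ip" above, and that dig is installed; the grep-based pod lookup and the kube-ingress-dns-minikube pod name are assumptions, since the ingress-dns pod's labels do not appear in this log):

	# Re-run the exact query that timed out
	nslookup hello-john.test 192.168.49.2
	# Same query via dig with a short timeout (assumes dig is installed)
	dig @192.168.49.2 hello-john.test +time=3 +tries=1
	# Confirm the ingress-dns pod exists and is Running
	kubectl --context addons-340351 -n kube-system get pods | grep ingress-dns
	# Inspect its recent logs for bind errors on UDP/53 (pod name is an assumption)
	kubectl --context addons-340351 -n kube-system logs kube-ingress-dns-minikube --tail=50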
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-340351
helpers_test.go:235: (dbg) docker inspect addons-340351:

-- stdout --
	[
	    {
	        "Id": "ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41",
	        "Created": "2024-03-28T03:33:50.879043322Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3256679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T03:33:51.163351351Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/hosts",
	        "LogPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41-json.log",
	        "Name": "/addons-340351",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-340351:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-340351",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c-init/diff:/var/lib/docker/overlay2/30131fd39d8244f5536f8ed96d2d3a8ceec5075331a54f31974379c0fc24022e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-340351",
	                "Source": "/var/lib/docker/volumes/addons-340351/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-340351",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-340351",
	                "name.minikube.sigs.k8s.io": "addons-340351",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "362485c338d265f18b8f246e3c57edfa50d55f03fc51b2a73cb5710c90028c7f",
	            "SandboxKey": "/var/run/docker/netns/362485c338d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36229"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-340351": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "d8f795ce1dbe7578c09d0062e640e183397fd16ae79c8ee0ea01c72d97c71ebb",
	                    "EndpointID": "f232e14030a89bf3efb8b49fe48cfe4d27bd34f25a8ef79c485ffd39359923f6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-340351",
	                        "ddc972291ebe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
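The full inspect dump above can be narrowed to the fields the post-mortem actually needs using Go templates; the first command below appears verbatim in the provisioning log later in this report, and the second is a simplified variant of the template used there, so this is just the harness's own check run by hand:

	# Host port bound to the container's SSH port (36229 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-340351
	# Container IP on the addons-340351 network (192.168.49.2 in this run)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-340351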
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-340351 -n addons-340351
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-340351 logs -n 25: (1.516313982s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-613150   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-613150              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3         |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-613150              | download-only-613150   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | -o=json --download-only              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-831467              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0  |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-831467              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-417144              | download-only-417144   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-613150              | download-only-613150   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-831467              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | --download-only -p                   | download-docker-513448 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | download-docker-513448               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p download-docker-513448            | download-docker-513448 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | --download-only -p                   | binary-mirror-112527   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | binary-mirror-112527                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:33653               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-112527              | binary-mirror-112527   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| addons  | enable dashboard -p                  | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| start   | -p addons-340351 --wait=true         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:35 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-340351 ip                     | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:35 UTC | 28 Mar 24 03:35 UTC |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:35 UTC | 28 Mar 24 03:35 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-340351 addons                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| ssh     | addons-340351 ssh curl -s            | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-340351 ip                     | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	| addons  | addons-340351 addons                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 03:33:26
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 03:33:26.508915 3256235 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:33:26.509097 3256235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:26.509110 3256235 out.go:304] Setting ErrFile to fd 2...
	I0328 03:33:26.509116 3256235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:26.509382 3256235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:33:26.509885 3256235 out.go:298] Setting JSON to false
	I0328 03:33:26.510761 3256235 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":40544,"bootTime":1711556262,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:33:26.510831 3256235 start.go:139] virtualization:  
	I0328 03:33:26.547293 3256235 out.go:177] * [addons-340351] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:33:26.579004 3256235 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 03:33:26.579116 3256235 notify.go:220] Checking for updates...
	I0328 03:33:26.611212 3256235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:33:26.650488 3256235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:33:26.676013 3256235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:33:26.708239 3256235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 03:33:26.740501 3256235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 03:33:26.772704 3256235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:33:26.791323 3256235 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:33:26.791436 3256235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:26.844367 3256235 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:26.833215235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:26.844503 3256235 docker.go:295] overlay module found
	I0328 03:33:26.867828 3256235 out.go:177] * Using the docker driver based on user configuration
	I0328 03:33:26.901058 3256235 start.go:297] selected driver: docker
	I0328 03:33:26.901086 3256235 start.go:901] validating driver "docker" against <nil>
	I0328 03:33:26.901100 3256235 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 03:33:26.901781 3256235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:26.971375 3256235 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:26.962637936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:26.971549 3256235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 03:33:26.971783 3256235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 03:33:26.996147 3256235 out.go:177] * Using Docker driver with root privileges
	I0328 03:33:27.027928 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:33:27.027959 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:27.027970 3256235 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 03:33:27.028058 3256235 start.go:340] cluster config:
	{Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:27.060847 3256235 out.go:177] * Starting "addons-340351" primary control-plane node in "addons-340351" cluster
	I0328 03:33:27.095630 3256235 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 03:33:27.121614 3256235 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 03:33:27.156251 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:27.156345 3256235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0328 03:33:27.156358 3256235 cache.go:56] Caching tarball of preloaded images
	I0328 03:33:27.156259 3256235 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 03:33:27.156456 3256235 preload.go:173] Found /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 03:33:27.156473 3256235 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0328 03:33:27.157435 3256235 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json ...
	I0328 03:33:27.157477 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json: {Name:mkf325209a5b5c2613c0ed0e32acef3e8e2ab51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:27.170640 3256235 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:33:27.170772 3256235 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 03:33:27.170797 3256235 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 03:33:27.170802 3256235 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 03:33:27.170811 3256235 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 03:33:27.170822 3256235 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from local cache
	I0328 03:33:43.388963 3256235 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from cached tarball
	I0328 03:33:43.389004 3256235 cache.go:194] Successfully downloaded all kic artifacts
	I0328 03:33:43.389038 3256235 start.go:360] acquireMachinesLock for addons-340351: {Name:mk22ef88c2f18fdd8b3efd921e57718dafc59b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 03:33:43.389166 3256235 start.go:364] duration metric: took 104.85µs to acquireMachinesLock for "addons-340351"
	I0328 03:33:43.389205 3256235 start.go:93] Provisioning new machine with config: &{Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 03:33:43.389284 3256235 start.go:125] createHost starting for "" (driver="docker")
	I0328 03:33:43.391815 3256235 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0328 03:33:43.392046 3256235 start.go:159] libmachine.API.Create for "addons-340351" (driver="docker")
	I0328 03:33:43.392081 3256235 client.go:168] LocalClient.Create starting
	I0328 03:33:43.392180 3256235 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem
	I0328 03:33:43.919443 3256235 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem
	I0328 03:33:44.422231 3256235 cli_runner.go:164] Run: docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0328 03:33:44.440692 3256235 cli_runner.go:211] docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0328 03:33:44.440791 3256235 network_create.go:281] running [docker network inspect addons-340351] to gather additional debugging logs...
	I0328 03:33:44.440811 3256235 cli_runner.go:164] Run: docker network inspect addons-340351
	W0328 03:33:44.458186 3256235 cli_runner.go:211] docker network inspect addons-340351 returned with exit code 1
	I0328 03:33:44.458219 3256235 network_create.go:284] error running [docker network inspect addons-340351]: docker network inspect addons-340351: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-340351 not found
	I0328 03:33:44.458232 3256235 network_create.go:286] output of [docker network inspect addons-340351]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-340351 not found
	
	** /stderr **
	I0328 03:33:44.458356 3256235 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 03:33:44.474744 3256235 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400252b1c0}
	I0328 03:33:44.474789 3256235 network_create.go:124] attempt to create docker network addons-340351 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0328 03:33:44.474849 3256235 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-340351 addons-340351
	I0328 03:33:44.543201 3256235 network_create.go:108] docker network addons-340351 192.168.49.0/24 created
	I0328 03:33:44.543236 3256235 kic.go:121] calculated static IP "192.168.49.2" for the "addons-340351" container
	I0328 03:33:44.543327 3256235 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0328 03:33:44.556989 3256235 cli_runner.go:164] Run: docker volume create addons-340351 --label name.minikube.sigs.k8s.io=addons-340351 --label created_by.minikube.sigs.k8s.io=true
	I0328 03:33:44.572234 3256235 oci.go:103] Successfully created a docker volume addons-340351
	I0328 03:33:44.572347 3256235 cli_runner.go:164] Run: docker run --rm --name addons-340351-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --entrypoint /usr/bin/test -v addons-340351:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib
	I0328 03:33:46.532617 3256235 cli_runner.go:217] Completed: docker run --rm --name addons-340351-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --entrypoint /usr/bin/test -v addons-340351:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib: (1.960208397s)
	I0328 03:33:46.532648 3256235 oci.go:107] Successfully prepared a docker volume addons-340351
	I0328 03:33:46.532694 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:46.532716 3256235 kic.go:194] Starting extracting preloaded images to volume ...
	I0328 03:33:46.532800 3256235 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340351:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir
	I0328 03:33:50.817093 3256235 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340351:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284253306s)
	I0328 03:33:50.817129 3256235 kic.go:203] duration metric: took 4.284409667s to extract preloaded images to volume ...
	W0328 03:33:50.817273 3256235 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0328 03:33:50.817391 3256235 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0328 03:33:50.865792 3256235 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-340351 --name addons-340351 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-340351 --network addons-340351 --ip 192.168.49.2 --volume addons-340351:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82
	I0328 03:33:51.172683 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Running}}
	I0328 03:33:51.190960 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.210326 3256235 cli_runner.go:164] Run: docker exec addons-340351 stat /var/lib/dpkg/alternatives/iptables
	I0328 03:33:51.281258 3256235 oci.go:144] the created container "addons-340351" has a running status.
	I0328 03:33:51.281285 3256235 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa...
	I0328 03:33:51.664417 3256235 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0328 03:33:51.689443 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.707834 3256235 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0328 03:33:51.707857 3256235 kic_runner.go:114] Args: [docker exec --privileged addons-340351 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0328 03:33:51.780134 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.799495 3256235 machine.go:94] provisionDockerMachine start ...
	I0328 03:33:51.799744 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:51.823520 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:51.823781 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:51.823790 3256235 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 03:33:51.991749 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340351
	
	I0328 03:33:51.991798 3256235 ubuntu.go:169] provisioning hostname "addons-340351"
	I0328 03:33:51.991882 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.014822 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:52.015073 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:52.015085 3256235 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-340351 && echo "addons-340351" | sudo tee /etc/hostname
	I0328 03:33:52.178786 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340351
	
	I0328 03:33:52.178933 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.197272 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:52.197507 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:52.197523 3256235 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-340351' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-340351/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-340351' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 03:33:52.336340 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 03:33:52.336366 3256235 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18485-3249988/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-3249988/.minikube}
	I0328 03:33:52.336386 3256235 ubuntu.go:177] setting up certificates
	I0328 03:33:52.336398 3256235 provision.go:84] configureAuth start
	I0328 03:33:52.336483 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:52.352542 3256235 provision.go:143] copyHostCerts
	I0328 03:33:52.352633 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem (1123 bytes)
	I0328 03:33:52.352770 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem (1675 bytes)
	I0328 03:33:52.352850 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem (1078 bytes)
	I0328 03:33:52.352913 3256235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem org=jenkins.addons-340351 san=[127.0.0.1 192.168.49.2 addons-340351 localhost minikube]
	I0328 03:33:52.845534 3256235 provision.go:177] copyRemoteCerts
	I0328 03:33:52.845612 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 03:33:52.845653 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.860119 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:52.956812 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 03:33:52.979822 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 03:33:53.004380 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 03:33:53.029668 3256235 provision.go:87] duration metric: took 693.252709ms to configureAuth
	I0328 03:33:53.029694 3256235 ubuntu.go:193] setting minikube options for container-runtime
	I0328 03:33:53.029885 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:33:53.029902 3256235 machine.go:97] duration metric: took 1.230237745s to provisionDockerMachine
	I0328 03:33:53.029910 3256235 client.go:171] duration metric: took 9.637821806s to LocalClient.Create
	I0328 03:33:53.029924 3256235 start.go:167] duration metric: took 9.637877846s to libmachine.API.Create "addons-340351"
	I0328 03:33:53.029936 3256235 start.go:293] postStartSetup for "addons-340351" (driver="docker")
	I0328 03:33:53.029946 3256235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 03:33:53.029998 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 03:33:53.030047 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.045455 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.141602 3256235 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 03:33:53.144806 3256235 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 03:33:53.144843 3256235 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 03:33:53.144855 3256235 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 03:33:53.144863 3256235 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 03:33:53.144872 3256235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/addons for local assets ...
	I0328 03:33:53.144942 3256235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/files for local assets ...
	I0328 03:33:53.144970 3256235 start.go:296] duration metric: took 115.02837ms for postStartSetup
	I0328 03:33:53.145278 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:53.159955 3256235 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json ...
	I0328 03:33:53.160253 3256235 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 03:33:53.160299 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.175592 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.269005 3256235 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 03:33:53.273143 3256235 start.go:128] duration metric: took 9.883842827s to createHost
	I0328 03:33:53.273165 3256235 start.go:83] releasing machines lock for "addons-340351", held for 9.883987382s
	I0328 03:33:53.273248 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:53.287488 3256235 ssh_runner.go:195] Run: cat /version.json
	I0328 03:33:53.287540 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.287819 3256235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 03:33:53.287879 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.311310 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.312404 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.404060 3256235 ssh_runner.go:195] Run: systemctl --version
	I0328 03:33:53.521112 3256235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 03:33:53.525457 3256235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 03:33:53.551240 3256235 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0328 03:33:53.551320 3256235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 03:33:53.581554 3256235 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0328 03:33:53.581578 3256235 start.go:494] detecting cgroup driver to use...
	I0328 03:33:53.581612 3256235 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 03:33:53.581674 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 03:33:53.594431 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 03:33:53.606660 3256235 docker.go:217] disabling cri-docker service (if available) ...
	I0328 03:33:53.606724 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 03:33:53.621011 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 03:33:53.635728 3256235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 03:33:53.732581 3256235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 03:33:53.828098 3256235 docker.go:233] disabling docker service ...
	I0328 03:33:53.828190 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 03:33:53.848054 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 03:33:53.859878 3256235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 03:33:53.955546 3256235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 03:33:54.055598 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 03:33:54.066921 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 03:33:54.083894 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 03:33:54.094366 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 03:33:54.105158 3256235 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 03:33:54.105280 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 03:33:54.115512 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 03:33:54.126007 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 03:33:54.136717 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 03:33:54.147177 3256235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 03:33:54.156924 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 03:33:54.166649 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 03:33:54.177236 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 03:33:54.188382 3256235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 03:33:54.197034 3256235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 03:33:54.205309 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:33:54.304957 3256235 ssh_runner.go:195] Run: sudo systemctl restart containerd
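The sed runs above patch /etc/containerd/config.toml in place before this restart. A sketch for verifying the keys they target took effect, with the expected values taken from the commands in the log:

	# the settings the sed edits above should leave behind after the restart
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' \
	  /etc/containerd/config.toml
	# expected, among others:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false           (cgroupfs driver, matching the detected host driver)
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true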
	I0328 03:33:54.437461 3256235 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 03:33:54.437564 3256235 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 03:33:54.441000 3256235 start.go:562] Will wait 60s for crictl version
	I0328 03:33:54.441104 3256235 ssh_runner.go:195] Run: which crictl
	I0328 03:33:54.444290 3256235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 03:33:54.479793 3256235 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0328 03:33:54.479944 3256235 ssh_runner.go:195] Run: containerd --version
	I0328 03:33:54.501763 3256235 ssh_runner.go:195] Run: containerd --version
	I0328 03:33:54.525311 3256235 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0328 03:33:54.526951 3256235 cli_runner.go:164] Run: docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 03:33:54.539817 3256235 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0328 03:33:54.543213 3256235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 03:33:54.553458 3256235 kubeadm.go:877] updating cluster {Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 03:33:54.553579 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:54.553643 3256235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 03:33:54.592751 3256235 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 03:33:54.592778 3256235 containerd.go:534] Images already preloaded, skipping extraction
	I0328 03:33:54.592838 3256235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 03:33:54.627335 3256235 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 03:33:54.627357 3256235 cache_images.go:84] Images are preloaded, skipping loading
	I0328 03:33:54.627365 3256235 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0328 03:33:54.627465 3256235 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-340351 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
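The unit fragment above is installed as a systemd drop-in (10-kubeadm.conf, per the scp a few lines below), so the effective ExecStart only exists after merging. A sketch for inspecting the merged unit inside the node once the later daemon-reload has run:

	systemctl cat kubelet                 # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet   # the final command line the flags above produce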
	I0328 03:33:54.627535 3256235 ssh_runner.go:195] Run: sudo crictl info
	I0328 03:33:54.668582 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:33:54.668608 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:54.668620 3256235 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 03:33:54.668650 3256235 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-340351 NodeName:addons-340351 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 03:33:54.668799 3256235 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-340351"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 03:33:54.668875 3256235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 03:33:54.677577 3256235 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 03:33:54.677649 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 03:33:54.686204 3256235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0328 03:33:54.704466 3256235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 03:33:54.722444 3256235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
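The 2167-byte file staged here is the kubeadm config rendered above; it gets copied to /var/tmp/minikube/kubeadm.yaml further down before init. kubeadm gained a validate subcommand in recent releases, so a rendered config like this could be sanity-checked offline; a sketch, assuming the subcommand is present in the v1.29.3 binary staged by minikube:

	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new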
	I0328 03:33:54.740742 3256235 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0328 03:33:54.744495 3256235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 03:33:54.755289 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:33:54.845090 3256235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 03:33:54.859574 3256235 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351 for IP: 192.168.49.2
	I0328 03:33:54.859648 3256235 certs.go:194] generating shared ca certs ...
	I0328 03:33:54.859678 3256235 certs.go:226] acquiring lock for ca certs: {Name:mk654727350d982ceeedd640f586ca1658e18559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:54.860517 3256235 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key
	I0328 03:33:55.181825 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt ...
	I0328 03:33:55.181865 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt: {Name:mk5797fdd0e7a871dd7cc8cb611c61502a1449b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.182804 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key ...
	I0328 03:33:55.182829 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key: {Name:mk73aba0d6b2144be8203e586a02904016d466db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.183456 3256235 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key
	I0328 03:33:55.425957 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt ...
	I0328 03:33:55.425987 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt: {Name:mka02e0583616b5adccc14bc61748a76734feac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.426671 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key ...
	I0328 03:33:55.426688 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key: {Name:mk128862586320a93862063d97690310c13a0509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.426782 3256235 certs.go:256] generating profile certs ...
	I0328 03:33:55.426854 3256235 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key
	I0328 03:33:55.426878 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt with IP's: []
	I0328 03:33:55.683495 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt ...
	I0328 03:33:55.683526 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: {Name:mk0de8ba36d63ea102ad10d44d2bbf1c3143896f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.684495 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key ...
	I0328 03:33:55.684513 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key: {Name:mk7e857fb1a9b5e19f2991afb08cc3f69c4a8183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.684607 3256235 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638
	I0328 03:33:55.684626 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0328 03:33:56.042126 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 ...
	I0328 03:33:56.042158 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638: {Name:mk5a79ffee9a46ad2bef3c07aab9d891fd17073c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.043112 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638 ...
	I0328 03:33:56.043132 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638: {Name:mked53764170661be89b46c5c68a0ab80bd6eeca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.043709 3256235 certs.go:381] copying /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt
	I0328 03:33:56.043814 3256235 certs.go:385] copying /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key
	I0328 03:33:56.043870 3256235 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key
	I0328 03:33:56.043893 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt with IP's: []
	I0328 03:33:56.304600 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt ...
	I0328 03:33:56.304630 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt: {Name:mk393d71b89fbf8ff165f3c34812a846149bf605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.305297 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key ...
	I0328 03:33:56.305315 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key: {Name:mka6238dc4cca2c10224cdd08ce3ef020bb67f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.305509 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 03:33:56.305558 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem (1078 bytes)
	I0328 03:33:56.305587 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem (1123 bytes)
	I0328 03:33:56.305621 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem (1675 bytes)
	I0328 03:33:56.306290 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 03:33:56.335502 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 03:33:56.358931 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 03:33:56.384113 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 03:33:56.409524 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0328 03:33:56.434079 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 03:33:56.458484 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 03:33:56.482962 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 03:33:56.507666 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 03:33:56.532003 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
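With the CA and leaf certificates now in /var/lib/minikube/certs, the apiserver certificate should chain to the CA copied alongside it. A minimal check inside the node (sketch):

	sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
	  /var/lib/minikube/certs/apiserver.crt
	# expected: /var/lib/minikube/certs/apiserver.crt: OK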
	I0328 03:33:56.550386 3256235 ssh_runner.go:195] Run: openssl version
	I0328 03:33:56.555819 3256235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 03:33:56.565408 3256235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.568709 3256235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 03:33 /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.568780 3256235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.575621 3256235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
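The b5213941.0 link name is the certificate's OpenSSL subject hash, which is exactly what the openssl x509 -hash call two steps above prints; OpenSSL resolves CAs in /etc/ssl/certs by that hash. Sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, so the CA is found via /etc/ssl/certs/b5213941.0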
	I0328 03:33:56.584986 3256235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 03:33:56.588103 3256235 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 03:33:56.588159 3256235 kubeadm.go:391] StartCluster: {Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:56.588248 3256235 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0328 03:33:56.588343 3256235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 03:33:56.631928 3256235 cri.go:89] found id: ""
	I0328 03:33:56.631996 3256235 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 03:33:56.640719 3256235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 03:33:56.649299 3256235 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0328 03:33:56.649396 3256235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 03:33:56.659849 3256235 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 03:33:56.659869 3256235 kubeadm.go:156] found existing configuration files:
	
	I0328 03:33:56.659921 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 03:33:56.668308 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 03:33:56.668413 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 03:33:56.676710 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 03:33:56.685351 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 03:33:56.685425 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 03:33:56.693577 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 03:33:56.702370 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 03:33:56.702457 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 03:33:56.710539 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 03:33:56.719012 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 03:33:56.719106 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 03:33:56.727307 3256235 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0328 03:33:56.818234 3256235 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0328 03:33:56.886479 3256235 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 03:34:14.047702 3256235 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 03:34:14.047774 3256235 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 03:34:14.047859 3256235 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0328 03:34:14.047925 3256235 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0328 03:34:14.047958 3256235 kubeadm.go:309] OS: Linux
	I0328 03:34:14.048001 3256235 kubeadm.go:309] CGROUPS_CPU: enabled
	I0328 03:34:14.048062 3256235 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0328 03:34:14.048109 3256235 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0328 03:34:14.048155 3256235 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0328 03:34:14.048201 3256235 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0328 03:34:14.048249 3256235 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0328 03:34:14.048292 3256235 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0328 03:34:14.048345 3256235 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0328 03:34:14.048390 3256235 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0328 03:34:14.048466 3256235 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 03:34:14.048556 3256235 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 03:34:14.048643 3256235 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 03:34:14.048702 3256235 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 03:34:14.050808 3256235 out.go:204]   - Generating certificates and keys ...
	I0328 03:34:14.050902 3256235 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 03:34:14.050965 3256235 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 03:34:14.051028 3256235 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 03:34:14.051086 3256235 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 03:34:14.051149 3256235 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 03:34:14.051197 3256235 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 03:34:14.051248 3256235 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 03:34:14.051361 3256235 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-340351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 03:34:14.051411 3256235 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 03:34:14.051519 3256235 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-340351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 03:34:14.051581 3256235 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 03:34:14.051641 3256235 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 03:34:14.051683 3256235 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 03:34:14.051736 3256235 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 03:34:14.051785 3256235 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 03:34:14.051839 3256235 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 03:34:14.051890 3256235 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 03:34:14.051950 3256235 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 03:34:14.052001 3256235 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 03:34:14.052078 3256235 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 03:34:14.052141 3256235 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 03:34:14.054318 3256235 out.go:204]   - Booting up control plane ...
	I0328 03:34:14.054516 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 03:34:14.054651 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 03:34:14.054768 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 03:34:14.054915 3256235 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 03:34:14.055031 3256235 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 03:34:14.055075 3256235 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 03:34:14.055235 3256235 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 03:34:14.055314 3256235 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502594 seconds
	I0328 03:34:14.055424 3256235 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 03:34:14.055553 3256235 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 03:34:14.055613 3256235 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 03:34:14.055801 3256235 kubeadm.go:309] [mark-control-plane] Marking the node addons-340351 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 03:34:14.055858 3256235 kubeadm.go:309] [bootstrap-token] Using token: pmfpfc.jnx6gyasxx7a8lz7
	I0328 03:34:14.057824 3256235 out.go:204]   - Configuring RBAC rules ...
	I0328 03:34:14.057942 3256235 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 03:34:14.058036 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 03:34:14.058181 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 03:34:14.058325 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 03:34:14.058450 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 03:34:14.058545 3256235 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 03:34:14.058662 3256235 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 03:34:14.058706 3256235 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 03:34:14.058753 3256235 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 03:34:14.058757 3256235 kubeadm.go:309] 
	I0328 03:34:14.058819 3256235 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 03:34:14.058824 3256235 kubeadm.go:309] 
	I0328 03:34:14.058903 3256235 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 03:34:14.058907 3256235 kubeadm.go:309] 
	I0328 03:34:14.058933 3256235 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 03:34:14.058993 3256235 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 03:34:14.059045 3256235 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 03:34:14.059049 3256235 kubeadm.go:309] 
	I0328 03:34:14.059104 3256235 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 03:34:14.059108 3256235 kubeadm.go:309] 
	I0328 03:34:14.059157 3256235 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 03:34:14.059161 3256235 kubeadm.go:309] 
	I0328 03:34:14.059215 3256235 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 03:34:14.059293 3256235 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 03:34:14.059363 3256235 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 03:34:14.059367 3256235 kubeadm.go:309] 
	I0328 03:34:14.059454 3256235 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 03:34:14.059532 3256235 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 03:34:14.059537 3256235 kubeadm.go:309] 
	I0328 03:34:14.059623 3256235 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pmfpfc.jnx6gyasxx7a8lz7 \
	I0328 03:34:14.059730 3256235 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:997a843960cc5d2f806bfa4fc2e7f3f771ce9ed1a8f2f9b600560484642e5094 \
	I0328 03:34:14.059751 3256235 kubeadm.go:309] 	--control-plane 
	I0328 03:34:14.059755 3256235 kubeadm.go:309] 
	I0328 03:34:14.059842 3256235 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 03:34:14.059846 3256235 kubeadm.go:309] 
	I0328 03:34:14.059931 3256235 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pmfpfc.jnx6gyasxx7a8lz7 \
	I0328 03:34:14.060049 3256235 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:997a843960cc5d2f806bfa4fc2e7f3f771ce9ed1a8f2f9b600560484642e5094 
	I0328 03:34:14.060057 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:34:14.060064 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:34:14.062069 3256235 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 03:34:14.063851 3256235 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 03:34:14.068630 3256235 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 03:34:14.068686 3256235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 03:34:14.113151 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
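The manifest applied here deploys kindnet, the CNI recommended at cni.go:143 for the docker driver + containerd runtime combination. A way to watch it come up, assuming the DaemonSet keeps its upstream name kindnet:

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s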
	I0328 03:34:14.433874 3256235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 03:34:14.434033 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:14.434115 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-340351 minikube.k8s.io/updated_at=2024_03_28T03_34_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=addons-340351 minikube.k8s.io/primary=true
	I0328 03:34:14.582691 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:14.582748 3256235 ops.go:34] apiserver oom_adj: -16
	I0328 03:34:15.083536 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:15.583428 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:16.082828 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:16.583668 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:17.083576 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:17.583802 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:18.083351 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:18.583178 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:19.083500 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:19.583787 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:20.083484 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:20.582827 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:21.083412 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:21.582989 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:22.083736 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:22.583161 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:23.083235 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:23.583081 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:24.082807 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:24.583066 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:25.083689 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:25.582830 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.083130 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.583005 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.674317 3256235 kubeadm.go:1107] duration metric: took 12.240342244s to wait for elevateKubeSystemPrivileges
	W0328 03:34:26.674368 3256235 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 03:34:26.674378 3256235 kubeadm.go:393] duration metric: took 30.086223096s to StartCluster
	I0328 03:34:26.674394 3256235 settings.go:142] acquiring lock: {Name:mkc9f345268bcac5ebc4aa579f709fe3221112b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:34:26.674944 3256235 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:34:26.675353 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:34:26.676109 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 03:34:26.676142 3256235 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 03:34:26.678410 3256235 out.go:177] * Verifying Kubernetes components...
	I0328 03:34:26.676501 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:34:26.676513 3256235 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0328 03:34:26.680244 3256235 addons.go:69] Setting yakd=true in profile "addons-340351"
	I0328 03:34:26.680278 3256235 addons.go:234] Setting addon yakd=true in "addons-340351"
	I0328 03:34:26.680308 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.680846 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.681028 3256235 addons.go:69] Setting ingress-dns=true in profile "addons-340351"
	I0328 03:34:26.681052 3256235 addons.go:234] Setting addon ingress-dns=true in "addons-340351"
	I0328 03:34:26.681087 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.681456 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.681889 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:34:26.682068 3256235 addons.go:69] Setting inspektor-gadget=true in profile "addons-340351"
	I0328 03:34:26.682096 3256235 addons.go:234] Setting addon inspektor-gadget=true in "addons-340351"
	I0328 03:34:26.682130 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.682495 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.682750 3256235 addons.go:69] Setting cloud-spanner=true in profile "addons-340351"
	I0328 03:34:26.682799 3256235 addons.go:234] Setting addon cloud-spanner=true in "addons-340351"
	I0328 03:34:26.682840 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.683302 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.684771 3256235 addons.go:69] Setting metrics-server=true in profile "addons-340351"
	I0328 03:34:26.684807 3256235 addons.go:234] Setting addon metrics-server=true in "addons-340351"
	I0328 03:34:26.684841 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.685222 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.691650 3256235 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-340351"
	I0328 03:34:26.691748 3256235 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-340351"
	I0328 03:34:26.691886 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.692383 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.700464 3256235 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-340351"
	I0328 03:34:26.701811 3256235 addons.go:69] Setting default-storageclass=true in profile "addons-340351"
	I0328 03:34:26.701829 3256235 addons.go:69] Setting gcp-auth=true in profile "addons-340351"
	I0328 03:34:26.701840 3256235 addons.go:69] Setting ingress=true in profile "addons-340351"
	I0328 03:34:26.702410 3256235 addons.go:69] Setting registry=true in profile "addons-340351"
	I0328 03:34:26.702424 3256235 addons.go:69] Setting storage-provisioner=true in profile "addons-340351"
	I0328 03:34:26.702430 3256235 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-340351"
	I0328 03:34:26.702439 3256235 addons.go:69] Setting volumesnapshots=true in profile "addons-340351"
	I0328 03:34:26.705661 3256235 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-340351"
	I0328 03:34:26.708918 3256235 addons.go:234] Setting addon volumesnapshots=true in "addons-340351"
	I0328 03:34:26.709074 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.714209 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.715733 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.716207 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.717305 3256235 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-340351"
	I0328 03:34:26.717339 3256235 mustload.go:65] Loading cluster: addons-340351
	I0328 03:34:26.717353 3256235 addons.go:234] Setting addon ingress=true in "addons-340351"
	I0328 03:34:26.717364 3256235 addons.go:234] Setting addon registry=true in "addons-340351"
	I0328 03:34:26.717389 3256235 addons.go:234] Setting addon storage-provisioner=true in "addons-340351"
	I0328 03:34:26.717401 3256235 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-340351"
	I0328 03:34:26.718243 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.733171 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:34:26.733520 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.762418 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.770125 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.770411 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.770724 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.771132 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.786899 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.796811 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.813710 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0328 03:34:26.829180 3256235 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 03:34:26.829259 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0328 03:34:26.829361 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
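The Go template in that docker inspect call extracts the host port published for the node container's 22/tcp, which is how ssh_runner reaches sshd inside the container. A shorter way to read the same value (illustrative):

	# prints the host address:port mapped to sshd in the node container,
	# i.e. the 127.0.0.1:36229 that the sshutil lines below dial
	docker port addons-340351 22/tcp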
	I0328 03:34:26.851741 3256235 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0328 03:34:26.853678 3256235 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0328 03:34:26.855486 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0328 03:34:26.855506 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0328 03:34:26.855582 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.853832 3256235 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0328 03:34:26.857681 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0328 03:34:26.857769 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.853838 3256235 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0328 03:34:26.893273 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0328 03:34:26.893297 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0328 03:34:26.893364 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.913701 3256235 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0328 03:34:26.919425 3256235 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 03:34:26.919454 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0328 03:34:26.919543 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.966383 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.967881 3256235 addons.go:234] Setting addon default-storageclass=true in "addons-340351"
	I0328 03:34:26.968271 3256235 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0328 03:34:26.970521 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 03:34:26.970548 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 03:34:26.970617 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.968561 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.976269 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.990553 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 03:34:26.992586 3256235 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 03:34:26.992611 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 03:34:26.992684 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.054031 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.054086 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.054542 3256235 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-340351"
	I0328 03:34:27.055444 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.080405 3256235 out.go:177]   - Using image docker.io/registry:2.8.3
	I0328 03:34:27.087568 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0328 03:34:27.088217 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:27.089198 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0328 03:34:27.089205 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:27.089210 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0328 03:34:27.089695 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:27.090906 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0328 03:34:27.090926 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0328 03:34:27.090982 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.091480 3256235 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 03:34:27.092991 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 03:34:27.093082 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.094593 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0328 03:34:27.094617 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0328 03:34:27.094674 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.105519 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0328 03:34:27.112119 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0328 03:34:27.113999 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0328 03:34:27.120507 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:27.122098 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0328 03:34:27.123579 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0328 03:34:27.122377 3256235 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 03:34:27.126486 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0328 03:34:27.126560 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.128767 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0328 03:34:27.152799 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0328 03:34:27.158310 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0328 03:34:27.160010 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0328 03:34:27.160031 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0328 03:34:27.160092 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.158405 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.166197 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.241954 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.256418 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.265126 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.270873 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.270928 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.276771 3256235 out.go:177]   - Using image docker.io/busybox:stable
	I0328 03:34:27.279422 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.281655 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.282110 3256235 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0328 03:34:27.284301 3256235 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 03:34:27.284316 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0328 03:34:27.284485 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	W0328 03:34:27.298770 3256235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0328 03:34:27.298801 3256235 retry.go:31] will retry after 263.276035ms: ssh: handshake failed: EOF
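The first SSH dial hit EOF mid-handshake (sshd in the node container was still coming up), so sshutil backs off and retries. The same endpoint can be exercised by hand with the key path, port, and user from the sshutil lines (illustrative):

	ssh -o StrictHostKeyChecking=no -p 36229 \
	  -i /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa \
	  docker@127.0.0.1 true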
	I0328 03:34:27.308171 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.713514 3256235 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.037370238s)
	I0328 03:34:27.713773 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 03:34:27.713881 3256235 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.031976114s)
	I0328 03:34:27.713981 3256235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 03:34:27.770759 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0328 03:34:27.770785 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0328 03:34:27.926932 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 03:34:27.999153 3256235 node_ready.go:35] waiting up to 6m0s for node "addons-340351" to be "Ready" ...
	I0328 03:34:28.008788 3256235 node_ready.go:49] node "addons-340351" has status "Ready":"True"
	I0328 03:34:28.008869 3256235 node_ready.go:38] duration metric: took 9.563492ms for node "addons-340351" to be "Ready" ...
	I0328 03:34:28.008904 3256235 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 03:34:28.009533 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0328 03:34:28.009607 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0328 03:34:28.030833 3256235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4t7xb" in "kube-system" namespace to be "Ready" ...
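pod_ready polls each of the system-critical label selectors listed above until the matching pods report Ready. A rough kubectl equivalent for the kube-dns selector it starts with (illustrative):

	kubectl --context addons-340351 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m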
	I0328 03:34:28.053843 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 03:34:28.058697 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 03:34:28.059989 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0328 03:34:28.060069 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0328 03:34:28.083522 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 03:34:28.114751 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0328 03:34:28.114826 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0328 03:34:28.147109 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0328 03:34:28.147213 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0328 03:34:28.154450 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0328 03:34:28.171344 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 03:34:28.184104 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0328 03:34:28.184128 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0328 03:34:28.225581 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0328 03:34:28.225614 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0328 03:34:28.241321 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 03:34:28.296159 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 03:34:28.296185 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0328 03:34:28.340474 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0328 03:34:28.340501 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0328 03:34:28.379964 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0328 03:34:28.379986 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0328 03:34:28.401857 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0328 03:34:28.401887 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0328 03:34:28.409427 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0328 03:34:28.409454 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0328 03:34:28.512777 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0328 03:34:28.512806 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0328 03:34:28.523003 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0328 03:34:28.523027 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0328 03:34:28.653310 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0328 03:34:28.653343 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0328 03:34:28.699865 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0328 03:34:28.711008 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 03:34:28.711082 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 03:34:28.753421 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0328 03:34:28.753496 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0328 03:34:28.756017 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0328 03:34:28.756100 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0328 03:34:28.787386 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 03:34:28.787463 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 03:34:28.807874 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0328 03:34:28.807952 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0328 03:34:28.829372 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0328 03:34:28.869188 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0328 03:34:28.869261 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0328 03:34:28.903085 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 03:34:28.921059 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0328 03:34:28.921136 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0328 03:34:28.925455 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0328 03:34:28.925527 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0328 03:34:29.013812 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0328 03:34:29.013886 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0328 03:34:29.067907 3256235 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 03:34:29.067979 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0328 03:34:29.071312 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0328 03:34:29.071384 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0328 03:34:29.259964 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0328 03:34:29.260037 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0328 03:34:29.322386 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 03:34:29.338736 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 03:34:29.338810 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0328 03:34:29.494902 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 03:34:29.574066 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0328 03:34:29.574141 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0328 03:34:29.580559 3256235 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866732139s)
	I0328 03:34:29.580638 3256235 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
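The sed pipeline that just completed splices a hosts block into the CoreDNS Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the docker network gateway:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

The patched Corefile can be inspected afterwards with (illustrative):

	kubectl --context addons-340351 -n kube-system get configmap coredns -o yaml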
	I0328 03:34:29.742971 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0328 03:34:29.743043 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0328 03:34:29.931892 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 03:34:29.931966 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0328 03:34:30.040102 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:30.064420 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 03:34:30.085813 3256235 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-340351" context rescaled to 1 replicas
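kapi.go:248 scales the coredns deployment down from the kubeadm default of two replicas; one replica is enough for a single-node cluster. A hand-rolled equivalent (illustrative):

	kubectl --context addons-340351 -n kube-system scale deployment coredns --replicas=1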
	I0328 03:34:32.041698 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:32.058125 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.131156061s)
	I0328 03:34:32.058181 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.004238801s)
	I0328 03:34:32.058412 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.974818065s)
	I0328 03:34:32.058586 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.904050895s)
	I0328 03:34:32.058200 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.999424781s)
	I0328 03:34:32.058729 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.887289295s)
	W0328 03:34:32.077487 3256235 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
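That warning is a plain optimistic-concurrency conflict: the default-storageclass and storage-provisioner-rancher addons both patch StorageClass annotations around the same time, so the local-path object's resourceVersion moved between read and update and the API server rejected the stale write. Re-issuing the same change once the writers settle succeeds, e.g. (illustrative):

	kubectl --context addons-340351 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'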
	I0328 03:34:33.978200 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0328 03:34:33.978314 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:34.024807 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:34.315928 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0328 03:34:34.421015 3256235 addons.go:234] Setting addon gcp-auth=true in "addons-340351"
	I0328 03:34:34.421119 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:34.421642 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:34.445088 3256235 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0328 03:34:34.445140 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:34.471458 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:34.534178 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.292766667s)
	I0328 03:34:34.534221 3256235 addons.go:470] Verifying addon ingress=true in "addons-340351"
	I0328 03:34:34.534483 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.834584745s)
	I0328 03:34:34.534529 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.705087273s)
	I0328 03:34:34.534581 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.631426932s)
	I0328 03:34:34.534663 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.212201882s)
	I0328 03:34:34.534715 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.039749266s)
	I0328 03:34:34.537520 3256235 out.go:177] * Verifying ingress addon...
	I0328 03:34:34.541178 3256235 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0328 03:34:34.541536 3256235 addons.go:470] Verifying addon metrics-server=true in "addons-340351"
	I0328 03:34:34.541553 3256235 addons.go:470] Verifying addon registry=true in "addons-340351"
	I0328 03:34:34.543497 3256235 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-340351 service yakd-dashboard -n yakd-dashboard
	
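The minikube service command printed above resolves the yakd-dashboard service and opens it; appending --url prints the address instead of launching a browser (illustrative):

	minikube -p addons-340351 service yakd-dashboard -n yakd-dashboard --url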
	W0328 03:34:34.541599 3256235 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
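The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass CRD was created in the same apply batch, and the custom resource was submitted before the new API was established, hence "no matches for kind". The retry below succeeds once discovery catches up; the same condition can be waited on explicitly (illustrative):

	kubectl --context addons-340351 wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io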
	I0328 03:34:34.543941 3256235 retry.go:31] will retry after 364.40068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0328 03:34:34.547439 3256235 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0328 03:34:34.547898 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:34.549454 3256235 out.go:177] * Verifying registry addon...
	I0328 03:34:34.549654 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:34.553429 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0328 03:34:34.559949 3256235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0328 03:34:34.560031 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:34.915081 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
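On the retried pass kubectl runs with --force, and by now the snapshot CRDs created on the first attempt are established, so the VolumeSnapshotClass maps cleanly. The created class can be confirmed with (illustrative):

	kubectl --context addons-340351 get volumesnapshotclasses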
	I0328 03:34:35.047777 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:35.062488 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:35.585995 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:35.590613 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:35.700020 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.635492796s)
	I0328 03:34:35.700057 3256235 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-340351"
	I0328 03:34:35.700267 3256235 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.255156795s)
	I0328 03:34:35.702462 3256235 out.go:177] * Verifying csi-hostpath-driver addon...
	I0328 03:34:35.710969 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0328 03:34:35.712899 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:35.724987 3256235 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0328 03:34:35.726275 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0328 03:34:35.729266 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:35.729351 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0328 03:34:35.729367 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0328 03:34:35.800891 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0328 03:34:35.800920 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0328 03:34:35.825119 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0328 03:34:35.825145 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0328 03:34:35.851051 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
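gcp-auth installs a mutating admission webhook that injects the google_application_credentials.json copied above into newly created pods. Its registration can be inspected with (illustrative):

	kubectl --context addons-340351 get mutatingwebhookconfigurations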
	I0328 03:34:36.062814 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:36.066432 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:36.225257 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:36.545601 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:36.558533 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:36.614077 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69895122s)
	I0328 03:34:36.725231 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:36.988061 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.136966684s)
	I0328 03:34:36.990904 3256235 addons.go:470] Verifying addon gcp-auth=true in "addons-340351"
	I0328 03:34:36.994726 3256235 out.go:177] * Verifying gcp-auth addon...
	I0328 03:34:36.998267 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0328 03:34:37.006996 3256235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0328 03:34:37.007019 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
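Each kapi.go:96 line from here on is one poll of a label selector; the loop repeats until the matched pods leave Pending and report Ready. The same progression can be followed directly with a watch (illustrative):

	kubectl --context addons-340351 -n gcp-auth get pods \
	  -l kubernetes.io/minikube-addons=gcp-auth -w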
	I0328 03:34:37.047274 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:37.052298 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:37.059050 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:37.234383 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:37.502397 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:37.545786 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:37.558884 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:37.724950 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:38.003255 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:38.047160 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:38.060290 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:38.225188 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:38.502564 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:38.546475 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:38.558619 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:38.725199 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:39.004065 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:39.046508 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:39.059127 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:39.225087 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:39.502851 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:39.538662 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:39.547094 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:39.565271 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:39.727106 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:40.019787 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:40.063046 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:40.064602 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:40.225394 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:40.501926 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:40.547846 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:40.560831 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:40.727871 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:41.012666 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:41.054073 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:41.062368 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:41.228065 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:41.512474 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:41.546587 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:41.558432 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:41.725059 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:42.005281 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:42.038630 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:42.046391 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:42.058442 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:42.224719 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:42.502649 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:42.546346 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:42.559530 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:42.725278 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:43.004645 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:43.047827 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:43.059008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:43.225126 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:43.502083 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:43.550305 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:43.559258 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:43.724670 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:44.006181 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:44.046964 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:44.059283 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:44.223781 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:44.505292 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:44.538905 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:44.546263 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:44.559167 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:44.724859 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:45.033757 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:45.051207 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:45.067066 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:45.226297 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:45.501905 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:45.545882 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:45.558631 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:45.727459 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:46.002534 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:46.045950 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:46.058853 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:46.225933 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:46.502566 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:46.545253 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:46.558741 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:46.724077 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:47.003328 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:47.045879 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:47.046736 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:47.058168 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:47.224612 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:47.502687 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:47.545849 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:47.558350 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:47.724543 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:48.009413 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:48.047094 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:48.059589 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:48.224311 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:48.502073 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:48.545716 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:48.558520 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:48.724856 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:49.004770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:49.047063 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:49.058907 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:49.225196 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:49.502842 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:49.537492 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:49.547088 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:49.559091 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:49.723818 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:50.019853 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:50.047070 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:50.059382 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:50.224894 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:50.502505 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:50.546198 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:50.558970 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:50.723919 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:51.004809 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:51.045880 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:51.059593 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:51.224886 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:51.502627 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:51.537881 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:51.546229 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:51.561072 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:51.723815 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:52.010244 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:52.045650 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:52.058551 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:52.223887 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:52.502426 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:52.545364 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:52.562721 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:52.724488 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:53.004785 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:53.046260 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:53.058924 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:53.223736 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:53.501996 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:53.546246 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:53.559083 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:53.724534 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:54.008865 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:54.040964 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:54.047183 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:54.060008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:54.224572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:54.501950 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:54.545916 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:54.558575 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:54.724096 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:55.011728 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:55.047172 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:55.059746 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:55.226396 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:55.502899 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:55.545485 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:55.558972 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:55.724107 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:56.004828 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:56.046445 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:56.058091 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:56.223560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:56.503410 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:56.538853 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:56.546210 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:56.558840 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:56.723817 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:57.004716 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:57.046345 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:57.059234 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:57.224064 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:57.502277 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:57.546307 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:57.559137 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:57.723938 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:58.003389 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:58.047318 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:58.059482 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:58.225163 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:58.501977 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:58.546363 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:58.559008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:58.723999 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:59.005404 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:59.037944 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:59.046636 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:59.058029 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:59.225876 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:59.502709 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:59.545846 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:59.558424 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:59.724136 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:00.021640 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:00.113518 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:00.128767 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:00.241017 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:00.504352 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:00.546191 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:00.560181 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:00.723938 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:01.004540 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:01.038474 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:01.046581 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:01.058323 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:01.224953 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:01.503528 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:01.548212 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:01.562618 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:01.727758 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:02.004117 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:02.047304 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:02.059009 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:02.226327 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:02.502428 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:02.545974 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:02.562815 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:02.725861 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:03.010296 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:03.046392 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:03.059223 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:03.224260 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:03.502813 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:03.538790 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:03.545899 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:03.559424 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:03.724688 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:04.020750 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:04.047099 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:04.059399 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:04.226264 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:04.503207 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:04.547347 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:04.559203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:04.733223 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:05.008217 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:05.047263 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:05.060444 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:05.225441 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:05.502471 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:05.539034 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:05.546764 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:05.559521 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:05.728983 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:06.003835 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:06.049169 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:06.062068 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:06.225633 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:06.502565 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:06.547426 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:06.558723 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:06.725301 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:07.013357 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:07.048089 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:07.059927 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:07.224782 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:07.502886 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:07.546525 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:07.558318 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:07.724542 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:08.009609 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:08.039027 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:08.046914 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:08.060973 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:08.225121 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:08.504521 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:08.545841 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:08.559089 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:08.725545 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:09.003864 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:09.046383 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:09.059568 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:09.225272 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:09.503080 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:09.547216 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:09.561770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:09.724573 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.026659 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:10.044065 3256235 pod_ready.go:92] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.044095 3256235 pod_ready.go:81] duration metric: took 42.013170326s for pod "coredns-76f75df574-4t7xb" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.044107 3256235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d54jf" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.048370 3256235 pod_ready.go:97] error getting pod "coredns-76f75df574-d54jf" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-d54jf" not found
	I0328 03:35:10.048413 3256235 pod_ready.go:81] duration metric: took 4.268362ms for pod "coredns-76f75df574-d54jf" in "kube-system" namespace to be "Ready" ...
	E0328 03:35:10.048427 3256235 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-d54jf" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-d54jf" not found
	I0328 03:35:10.048461 3256235 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.051836 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:10.057110 3256235 pod_ready.go:92] pod "etcd-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.057133 3256235 pod_ready.go:81] duration metric: took 8.658483ms for pod "etcd-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.057181 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.061294 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:10.066906 3256235 pod_ready.go:92] pod "kube-apiserver-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.066929 3256235 pod_ready.go:81] duration metric: took 9.715553ms for pod "kube-apiserver-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.066942 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.080659 3256235 pod_ready.go:92] pod "kube-controller-manager-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.080695 3256235 pod_ready.go:81] duration metric: took 13.744233ms for pod "kube-controller-manager-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.080708 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29lc9" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.224828 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.235436 3256235 pod_ready.go:92] pod "kube-proxy-29lc9" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.235468 3256235 pod_ready.go:81] duration metric: took 154.752868ms for pod "kube-proxy-29lc9" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.235480 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.502471 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:10.550785 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:10.564207 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:10.635433 3256235 pod_ready.go:92] pod "kube-scheduler-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.635460 3256235 pod_ready.go:81] duration metric: took 399.971514ms for pod "kube-scheduler-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.635471 3256235 pod_ready.go:38] duration metric: took 42.626531442s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
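The pod_ready lines above are a polling loop: each system pod is fetched repeatedly until its Ready condition reports "True" or the per-pod timeout (6m0s here) expires. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path and reusing a pod name from this log purely for illustration; it shows the polling idea, not minikube's actual pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is what the `has status "Ready":"True"` lines record.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll until the deadline, mirroring the "waiting up to 6m0s" lines.
    	// The 2s interval is an arbitrary choice for the sketch.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "etcd-addons-340351", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }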
	I0328 03:35:10.635486 3256235 api_server.go:52] waiting for apiserver process to appear ...
	I0328 03:35:10.635550 3256235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 03:35:10.654512 3256235 api_server.go:72] duration metric: took 43.978337442s to wait for apiserver process to appear ...
	I0328 03:35:10.654545 3256235 api_server.go:88] waiting for apiserver healthz status ...
	I0328 03:35:10.654566 3256235 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0328 03:35:10.662733 3256235 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0328 03:35:10.664038 3256235 api_server.go:141] control plane version: v1.29.3
	I0328 03:35:10.664063 3256235 api_server.go:131] duration metric: took 9.510882ms to wait for apiserver health ...
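The healthz wait is simpler: an HTTPS GET against the apiserver endpoint logged above, accepted once it returns 200 with body "ok". A short sketch of the same probe; certificate verification is skipped only to keep the sketch self-contained, where a real client would present the cluster CA and credentials:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify keeps the sketch short; do not do this in
    	// production code.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the body "ok", matching
    	// the "returned 200: ok" lines above.
    	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
    }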
	I0328 03:35:10.664072 3256235 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 03:35:10.724381 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.842105 3256235 system_pods.go:59] 18 kube-system pods found
	I0328 03:35:10.842142 3256235 system_pods.go:61] "coredns-76f75df574-4t7xb" [5af40e4a-d195-4c14-85cb-5de85be714fa] Running
	I0328 03:35:10.842152 3256235 system_pods.go:61] "csi-hostpath-attacher-0" [fcc8acbf-a1b1-4585-9ad0-d490f65f1171] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0328 03:35:10.842176 3256235 system_pods.go:61] "csi-hostpath-resizer-0" [f455f10e-f271-4bba-8b18-8ced67632a6d] Running
	I0328 03:35:10.842199 3256235 system_pods.go:61] "csi-hostpathplugin-pjsbd" [6d3f46d2-ad55-4e5b-88be-12e3ca376390] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0328 03:35:10.842214 3256235 system_pods.go:61] "etcd-addons-340351" [6c2b4711-647d-46dd-9b0d-9cc19c44f521] Running
	I0328 03:35:10.842219 3256235 system_pods.go:61] "kindnet-67627" [8ba8509a-1bee-481d-ab65-7aa3b7161a46] Running
	I0328 03:35:10.842227 3256235 system_pods.go:61] "kube-apiserver-addons-340351" [46c6af0e-cfde-4823-b0fa-9b90caa3ca7b] Running
	I0328 03:35:10.842231 3256235 system_pods.go:61] "kube-controller-manager-addons-340351" [a9bc9778-74a8-47eb-a1cc-773b8d33b514] Running
	I0328 03:35:10.842241 3256235 system_pods.go:61] "kube-ingress-dns-minikube" [f35953a6-96f4-48f7-a782-631feac05115] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 03:35:10.842255 3256235 system_pods.go:61] "kube-proxy-29lc9" [d8072468-8899-4ea7-a9f1-c8be947568f4] Running
	I0328 03:35:10.842260 3256235 system_pods.go:61] "kube-scheduler-addons-340351" [7b32c249-e353-4f2d-8444-a5215aa66c54] Running
	I0328 03:35:10.842266 3256235 system_pods.go:61] "metrics-server-69cf46c98-87zwk" [912dbcd4-98b5-4145-a0ad-4cfa8d5f457c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 03:35:10.842274 3256235 system_pods.go:61] "nvidia-device-plugin-daemonset-24zx7" [87d15db8-a090-4212-9d30-443f2319b151] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0328 03:35:10.842280 3256235 system_pods.go:61] "registry-l2d8j" [efbdf6d1-f769-43b5-92a9-b4b43129bbc9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0328 03:35:10.842291 3256235 system_pods.go:61] "registry-proxy-qdjhx" [48e81d37-f08c-4677-a66e-2dc91903192d] Running
	I0328 03:35:10.842298 3256235 system_pods.go:61] "snapshot-controller-58dbcc7b99-c7fr6" [cd05cbb1-c073-40de-8a48-8ec98af0c76a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0328 03:35:10.842303 3256235 system_pods.go:61] "snapshot-controller-58dbcc7b99-w6n4b" [a37ef814-9ca2-4124-9732-97d999919dd4] Running
	I0328 03:35:10.842313 3256235 system_pods.go:61] "storage-provisioner" [28a444ff-2b92-49df-ba5a-adf2135bd722] Running
	I0328 03:35:10.842321 3256235 system_pods.go:74] duration metric: took 178.24206ms to wait for pod list to return data ...
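The system_pods step is a single List call over the kube-system namespace, after which each pod's phase and any still-unready containers are reported; that is where the "Pending / Ready:ContainersNotReady (containers with unready status: [...])" annotations come from. A sketch producing a similar summary, under the same kubeconfig assumption as before:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Containers whose Ready flag is still false are the ones the
    		// "containers with unready status: [...]" notes enumerate.
    		var unready []string
    		for _, cs := range p.Status.ContainerStatuses {
    			if !cs.Ready {
    				unready = append(unready, cs.Name)
    			}
    		}
    		fmt.Printf("%q %s unready=%v\n", p.Name, p.Status.Phase, unready)
    	}
    }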
	I0328 03:35:10.842330 3256235 default_sa.go:34] waiting for default service account to be created ...
	I0328 03:35:11.006441 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:11.034730 3256235 default_sa.go:45] found service account: "default"
	I0328 03:35:11.034756 3256235 default_sa.go:55] duration metric: took 192.415744ms for default service account to be created ...
	I0328 03:35:11.034766 3256235 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 03:35:11.048374 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:11.059991 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:11.224770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:11.241540 3256235 system_pods.go:86] 18 kube-system pods found
	I0328 03:35:11.241575 3256235 system_pods.go:89] "coredns-76f75df574-4t7xb" [5af40e4a-d195-4c14-85cb-5de85be714fa] Running
	I0328 03:35:11.241585 3256235 system_pods.go:89] "csi-hostpath-attacher-0" [fcc8acbf-a1b1-4585-9ad0-d490f65f1171] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0328 03:35:11.241591 3256235 system_pods.go:89] "csi-hostpath-resizer-0" [f455f10e-f271-4bba-8b18-8ced67632a6d] Running
	I0328 03:35:11.241599 3256235 system_pods.go:89] "csi-hostpathplugin-pjsbd" [6d3f46d2-ad55-4e5b-88be-12e3ca376390] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0328 03:35:11.241605 3256235 system_pods.go:89] "etcd-addons-340351" [6c2b4711-647d-46dd-9b0d-9cc19c44f521] Running
	I0328 03:35:11.241609 3256235 system_pods.go:89] "kindnet-67627" [8ba8509a-1bee-481d-ab65-7aa3b7161a46] Running
	I0328 03:35:11.241614 3256235 system_pods.go:89] "kube-apiserver-addons-340351" [46c6af0e-cfde-4823-b0fa-9b90caa3ca7b] Running
	I0328 03:35:11.241618 3256235 system_pods.go:89] "kube-controller-manager-addons-340351" [a9bc9778-74a8-47eb-a1cc-773b8d33b514] Running
	I0328 03:35:11.241626 3256235 system_pods.go:89] "kube-ingress-dns-minikube" [f35953a6-96f4-48f7-a782-631feac05115] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 03:35:11.241631 3256235 system_pods.go:89] "kube-proxy-29lc9" [d8072468-8899-4ea7-a9f1-c8be947568f4] Running
	I0328 03:35:11.241645 3256235 system_pods.go:89] "kube-scheduler-addons-340351" [7b32c249-e353-4f2d-8444-a5215aa66c54] Running
	I0328 03:35:11.241651 3256235 system_pods.go:89] "metrics-server-69cf46c98-87zwk" [912dbcd4-98b5-4145-a0ad-4cfa8d5f457c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 03:35:11.241660 3256235 system_pods.go:89] "nvidia-device-plugin-daemonset-24zx7" [87d15db8-a090-4212-9d30-443f2319b151] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0328 03:35:11.241675 3256235 system_pods.go:89] "registry-l2d8j" [efbdf6d1-f769-43b5-92a9-b4b43129bbc9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0328 03:35:11.241680 3256235 system_pods.go:89] "registry-proxy-qdjhx" [48e81d37-f08c-4677-a66e-2dc91903192d] Running
	I0328 03:35:11.241687 3256235 system_pods.go:89] "snapshot-controller-58dbcc7b99-c7fr6" [cd05cbb1-c073-40de-8a48-8ec98af0c76a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0328 03:35:11.241701 3256235 system_pods.go:89] "snapshot-controller-58dbcc7b99-w6n4b" [a37ef814-9ca2-4124-9732-97d999919dd4] Running
	I0328 03:35:11.241705 3256235 system_pods.go:89] "storage-provisioner" [28a444ff-2b92-49df-ba5a-adf2135bd722] Running
	I0328 03:35:11.241713 3256235 system_pods.go:126] duration metric: took 206.940197ms to wait for k8s-apps to be running ...
	I0328 03:35:11.241721 3256235 system_svc.go:44] waiting for kubelet service to be running ...
	I0328 03:35:11.241783 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 03:35:11.255008 3256235 system_svc.go:56] duration metric: took 13.277112ms (WaitForService) to wait for kubelet
	I0328 03:35:11.255036 3256235 kubeadm.go:576] duration metric: took 44.578865931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
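The kubelet probe just above relies entirely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch of the same check, assuming it runs on the node itself (minikube executes it through its SSH runner, as the ssh_runner line shows, and its logged invocation carries an extra "service" token):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; success is signalled purely by exit code 0.
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }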
	I0328 03:35:11.255056 3256235 node_conditions.go:102] verifying NodePressure condition ...
	I0328 03:35:11.435640 3256235 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0328 03:35:11.435675 3256235 node_conditions.go:123] node cpu capacity is 2
	I0328 03:35:11.435688 3256235 node_conditions.go:105] duration metric: took 180.626912ms to run NodePressure ...
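The NodePressure step reads each node's capacity (the 203034800Ki of ephemeral storage and 2 CPUs logged above) alongside its pressure conditions, all of which should be False on a healthy node. A client-go sketch of gathering the same data, with the usual kubeconfig assumption:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity carries the figures logged above.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

    		// Pressure conditions should all read False on a healthy node.
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }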
	I0328 03:35:11.435722 3256235 start.go:240] waiting for startup goroutines ...
	I0328 03:35:11.502660 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:11.547936 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:11.559239 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:11.726197 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:12.002666 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:12.048212 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:12.059884 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:12.227903 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:12.503090 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:12.562187 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:12.570710 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:12.727568 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:13.002664 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:13.048470 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:13.094501 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:13.227084 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:13.503672 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:13.550674 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:13.563609 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:13.726443 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:14.003320 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:14.046574 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:14.059325 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:14.225045 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:14.502527 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:14.547163 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:14.570572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:14.724956 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:15.002177 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:15.046440 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:15.059203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:15.225256 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:15.502262 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:15.554779 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:15.566317 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:15.730292 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:16.004015 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:16.046721 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:16.060524 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:16.227167 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:16.502183 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:16.546222 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:16.558822 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:16.725046 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:17.003491 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:17.046635 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:17.058252 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:17.223983 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:17.505984 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:17.546716 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:17.559355 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:17.725329 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:18.012151 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:18.046244 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:18.060023 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:18.226011 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:18.502985 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:18.546515 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:18.559103 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:18.725462 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:19.004560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:19.045959 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:19.059107 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:19.227406 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:19.502166 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:19.545640 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:19.558482 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:19.725118 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:20.003823 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:20.046721 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:20.061356 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:20.224401 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:20.502018 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:20.546673 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:20.558985 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:20.727706 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:21.005781 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:21.053795 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:21.059203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:21.224230 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:21.503228 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:21.547172 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:21.559919 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:21.724275 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:22.011168 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:22.045932 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:22.058920 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:22.224367 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:22.502771 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:22.546166 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:22.559358 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:22.725266 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:23.005939 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:23.046759 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:23.059448 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:23.224782 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:23.502569 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:23.546490 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:23.558038 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:23.728098 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:24.005967 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:24.051732 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:24.058372 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:24.230541 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:24.502596 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:24.546295 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:24.560014 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:24.727789 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:25.003904 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:25.047151 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:25.059279 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:25.224632 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:25.507450 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:25.551296 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:25.565613 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:25.727734 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:26.003525 3256235 kapi.go:107] duration metric: took 49.005258624s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0328 03:35:26.006309 3256235 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-340351 cluster.
	I0328 03:35:26.008622 3256235 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0328 03:35:26.010634 3256235 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0328 03:35:26.047471 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:26.058572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:26.225622 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:26.549552 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:26.559581 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:26.724746 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:27.047538 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:27.059469 3256235 kapi.go:107] duration metric: took 52.506038387s to wait for kubernetes.io/minikube-addons=registry ...
	I0328 03:35:27.232002 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:27.547392 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:27.724761 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:28.047132 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:28.225696 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:28.546683 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:28.724831 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:29.045542 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:29.226063 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:29.545923 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:29.725108 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:30.051994 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:30.227295 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:30.546308 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:30.723407 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:31.046808 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:31.224934 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:31.545951 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:31.725584 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:32.046984 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:32.228560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:32.546138 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:32.725202 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:33.046260 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:33.225712 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:33.545779 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:33.724396 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:34.045653 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:34.225849 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:34.545420 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:34.724199 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:35.046564 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:35.224440 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:35.545803 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:35.725549 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:36.046399 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:36.224477 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:36.547154 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:36.725233 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:37.051797 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:37.224470 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:37.546399 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:37.723866 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:38.046275 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:38.225624 3256235 kapi.go:107] duration metric: took 1m2.51465373s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0328 03:35:38.547175 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:39.047101 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:39.545917 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:40.048121 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:40.546171 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:41.047123 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:41.546285 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:42.046255 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:42.554823 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:43.045215 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:43.545654 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:44.051773 3256235 kapi.go:107] duration metric: took 1m9.510582976s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0328 03:35:44.056456 3256235 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I0328 03:35:44.059076 3256235 addons.go:505] duration metric: took 1m17.382551328s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget metrics-server yakd volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I0328 03:35:44.059142 3256235 start.go:245] waiting for cluster config update ...
	I0328 03:35:44.059163 3256235 start.go:254] writing updated cluster config ...
	I0328 03:35:44.059500 3256235 ssh_runner.go:195] Run: rm -f paused
	I0328 03:35:44.408626 3256235 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 03:35:44.411068 3256235 out.go:177] * Done! kubectl is now configured to use "addons-340351" cluster and "default" namespace by default
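	
	A note on the gcp-auth messages logged at 03:35:26 above: the addon works through a webhook (see "GCP Auth Webhook started!" in the gcp-auth section below) that injects credentials into every new pod, and the opt-out is the `gcp-auth-skip-secret` label the log mentions. As a minimal sketch of what that looks like in a pod manifest (the pod name and image are illustrative placeholders, and the "true" value follows minikube's documented convention; only the label key appears in this log):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-creds            # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"  # opts this pod out of credential mounting
	    spec:
	      containers:
	      - name: app
	        image: nginx                  # placeholder image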
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	c525c9eedb856       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app              2                   7f3190163f6d5       hello-world-app-5d77478584-gzhzb
	8424d7ac30795       b8c82647e8a25       33 seconds ago       Running             nginx                        0                   7bea4ddaaa56f       nginx
	e15a95ffb70b4       7ce2150c8929b       About a minute ago   Running             local-path-provisioner       0                   98a1709d49969       local-path-provisioner-78b46b4d5c-78cz5
	c037b7b5ccb3a       20e3f2db01e81       About a minute ago   Running             yakd                         0                   9bb2de1289b0e       yakd-dashboard-9947fc6bf-mhzbx
	c2b9ca89f109e       6ef582f3ec844       About a minute ago   Running             gcp-auth                     0                   8c4d3f5f59a90       gcp-auth-7d69788767-drfvk
	40f862debf865       6727f8bc3105d       About a minute ago   Running             cloud-spanner-emulator       0                   07464a7bfdb5f       cloud-spanner-emulator-5446596998-79w62
	59b00ae571e28       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr     0                   29f6e03789329       nvidia-device-plugin-daemonset-24zx7
	458d4a3adf46d       1a024e390dd05       About a minute ago   Exited              patch                        2                   69d2456192928       ingress-nginx-admission-patch-9qxlj
	4f495b3b532f3       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   cf1648da81a77       snapshot-controller-58dbcc7b99-c7fr6
	fcf58965ab350       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   0bb1467a02dbd       snapshot-controller-58dbcc7b99-w6n4b
	8d1c112873817       2437cf7621777       About a minute ago   Running             coredns                      0                   59d25e2c99736       coredns-76f75df574-4t7xb
	c6edd75053d61       1a024e390dd05       About a minute ago   Exited              create                       0                   383e76396edb1       ingress-nginx-admission-create-lcmws
	9388e1fc0827b       ba04bb24b9575       2 minutes ago        Running             storage-provisioner          0                   bb330c58b668d       storage-provisioner
	33103b22f5e2e       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                  0                   86fe81901b85b       kindnet-67627
	0b803a1e6aaec       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                   0                   3b34d3414c3e4       kube-proxy-29lc9
	ba25d3bfe399f       4b51f9f6bc9b9       2 minutes ago        Running             kube-scheduler               0                   1c13a8367c3a4       kube-scheduler-addons-340351
	20b33063be704       014faa467e297       2 minutes ago        Running             etcd                         0                   42aeddac88c83       etcd-addons-340351
	e0f96c58a12b1       121d70d9a3805       2 minutes ago        Running             kube-controller-manager      0                   8d4a0e561b27e       kube-controller-manager-addons-340351
	d594d62b87501       2581114f5709d       2 minutes ago        Running             kube-apiserver               0                   ac8ed49880f7e       kube-apiserver-addons-340351
	
	
	==> containerd <==
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.256191877Z" level=error msg="ContainerStatus for \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.256567752Z" level=error msg="ContainerStatus for \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.256907861Z" level=error msg="ContainerStatus for \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.257292581Z" level=error msg="ContainerStatus for \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.257623534Z" level=error msg="ContainerStatus for \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.258014735Z" level=error msg="ContainerStatus for \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.258419803Z" level=error msg="ContainerStatus for \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.258752659Z" level=error msg="ContainerStatus for \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.259335043Z" level=error msg="ContainerStatus for \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.259705502Z" level=error msg="ContainerStatus for \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.260033247Z" level=error msg="ContainerStatus for \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.260514768Z" level=error msg="ContainerStatus for \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.261086354Z" level=error msg="ContainerStatus for \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.261448781Z" level=error msg="ContainerStatus for \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.261793772Z" level=error msg="ContainerStatus for \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.262126054Z" level=error msg="ContainerStatus for \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.262579793Z" level=error msg="ContainerStatus for \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.262942827Z" level=error msg="ContainerStatus for \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.263288204Z" level=error msg="ContainerStatus for \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.263629495Z" level=error msg="ContainerStatus for \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.263970105Z" level=error msg="ContainerStatus for \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.264311782Z" level=error msg="ContainerStatus for \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": not found"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.265729917Z" level=info msg="RemoveContainer for \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\""
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.271576499Z" level=info msg="RemoveContainer for \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\" returns successfully"
	Mar 28 03:36:49 addons-340351 containerd[761]: time="2024-03-28T03:36:49.272108340Z" level=error msg="ContainerStatus for \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\": not found"
	
	
	==> coredns [8d1c112873817d2b1615acd08b3a26b2a7436958f846ea54704dc771aac6e24e] <==
	[INFO] 10.244.0.20:37995 - 24958 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034058s
	[INFO] 10.244.0.20:39566 - 122 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053767s
	[INFO] 10.244.0.20:58497 - 360 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031688s
	[INFO] 10.244.0.20:48196 - 37035 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026568s
	[INFO] 10.244.0.20:48196 - 743 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001549692s
	[INFO] 10.244.0.20:39566 - 56174 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059002s
	[INFO] 10.244.0.20:58497 - 3953 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002073444s
	[INFO] 10.244.0.20:48196 - 21582 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000883265s
	[INFO] 10.244.0.20:39566 - 10768 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059477s
	[INFO] 10.244.0.20:48196 - 10754 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057885s
	[INFO] 10.244.0.20:58497 - 29508 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000861039s
	[INFO] 10.244.0.20:39566 - 32604 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038514s
	[INFO] 10.244.0.20:39566 - 22284 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046703s
	[INFO] 10.244.0.20:58497 - 57720 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030489s
	[INFO] 10.244.0.20:39566 - 8334 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069758s
	[INFO] 10.244.0.20:39566 - 28348 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001149243s
	[INFO] 10.244.0.20:39566 - 48992 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000994672s
	[INFO] 10.244.0.20:50134 - 1846 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046424s
	[INFO] 10.244.0.20:37995 - 21533 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001717483s
	[INFO] 10.244.0.20:39566 - 39567 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102857s
	[INFO] 10.244.0.20:50134 - 52174 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002117742s
	[INFO] 10.244.0.20:37995 - 48382 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005684437s
	[INFO] 10.244.0.20:50134 - 11654 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005069348s
	[INFO] 10.244.0.20:50134 - 471 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057402s
	[INFO] 10.244.0.20:37995 - 41038 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002724s
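	
	The NXDOMAIN/NOERROR pattern above is expected resolver behavior rather than a CoreDNS fault: with Kubernetes' default ndots:5, the client (10.244.0.20, evidently a pod in the ingress-nginx namespace given the .ingress-nginx.svc.cluster.local expansions) walks each search domain, including the host's us-east-2.compute.internal suffix, before the fully-qualified name resolves with NOERROR. A representative in-pod /etc/resolv.conf that would produce this query sequence (the nameserver address is the conventional kube-dns ClusterIP, assumed rather than captured in this run):
	
	    nameserver 10.96.0.10
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    options ndots:5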
	
	
	==> describe nodes <==
	Name:               addons-340351
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-340351
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=addons-340351
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T03_34_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-340351
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 03:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-340351
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 03:36:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-340351
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	System Info:
	  Machine ID:                 c929d13281ab461f9ae34957a1d9a2b2
	  System UUID:                68cfd376-65a2-46b7-bd23-4c7605fa936a
	  Boot ID:                    6d3ffb57-9092-48f6-a12c-685c1918590f
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-79w62    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  default                     hello-world-app-5d77478584-gzhzb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-7d69788767-drfvk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 coredns-76f75df574-4t7xb                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-addons-340351                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m39s
	  kube-system                 kindnet-67627                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-addons-340351               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-controller-manager-addons-340351      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-proxy-29lc9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-addons-340351               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 nvidia-device-plugin-daemonset-24zx7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 snapshot-controller-58dbcc7b99-c7fr6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 snapshot-controller-58dbcc7b99-w6n4b       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  local-path-storage          local-path-provisioner-78b46b4d5c-78cz5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-mhzbx             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node addons-340351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node addons-340351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node addons-340351 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s                  kubelet          Node addons-340351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s                  kubelet          Node addons-340351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m39s                  kubelet          Node addons-340351 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m39s                  kubelet          Node addons-340351 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m39s                  kubelet          Node addons-340351 status is now: NodeReady
	  Normal  RegisteredNode           2m27s                  node-controller  Node addons-340351 event: Registered Node addons-340351 in Controller
	
	
	==> dmesg <==
	[  +0.001027] FS-Cache: O-key=[8] '6ae0c90000000000'
	[  +0.000690] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000921] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000088d31df7
	[  +0.001082] FS-Cache: N-key=[8] '6ae0c90000000000'
	[  +0.002528] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001280] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=0000000032bd8fff
	[  +0.001075] FS-Cache: O-key=[8] '6ae0c90000000000'
	[  +0.000794] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=000000007eb28088
	[  +0.001194] FS-Cache: N-key=[8] '6ae0c90000000000'
	[  +2.528101] FS-Cache: Duplicate cookie detected
	[  +0.000686] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001234] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000a2709a24
	[  +0.001172] FS-Cache: O-key=[8] '69e0c90000000000'
	[  +0.000678] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000088d31df7
	[  +0.001002] FS-Cache: N-key=[8] '69e0c90000000000'
	[  +0.307773] FS-Cache: Duplicate cookie detected
	[  +0.000920] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000b2051977
	[  +0.001032] FS-Cache: O-key=[8] '6fe0c90000000000'
	[  +0.000724] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=00000000670d971e
	[  +0.001075] FS-Cache: N-key=[8] '6fe0c90000000000'
	
	
	==> etcd [20b33063be704cb6eec5d3b9ce758f0e449df1c4d71e973f82bcc91b96466b84] <==
	{"level":"info","ts":"2024-03-28T03:34:06.545302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-28T03:34:06.552916Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-28T03:34:06.597386Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T03:34:06.602302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-28T03:34:06.60461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-28T03:34:06.620509Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T03:34:06.620479Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T03:34:06.793239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.803327Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.808512Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-340351 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T03:34:06.808733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T03:34:06.80919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T03:34:06.810942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T03:34:06.820505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T03:34:06.820676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T03:34:06.822262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-28T03:34:06.84073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.841023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.841164Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [c2b9ca89f109eecbba6a26d6ff5a32983f58993d8a660e8408bfa7d08cf71c82] <==
	2024/03/28 03:35:25 GCP Auth Webhook started!
	2024/03/28 03:35:55 Ready to marshal response ...
	2024/03/28 03:35:55 Ready to write response ...
	2024/03/28 03:36:18 Ready to marshal response ...
	2024/03/28 03:36:18 Ready to write response ...
	2024/03/28 03:36:18 Ready to marshal response ...
	2024/03/28 03:36:18 Ready to write response ...
	2024/03/28 03:36:27 Ready to marshal response ...
	2024/03/28 03:36:27 Ready to write response ...
	2024/03/28 03:36:39 Ready to marshal response ...
	2024/03/28 03:36:39 Ready to write response ...
	
	
	==> kernel <==
	 03:36:53 up 11:19,  0 users,  load average: 3.60, 3.93, 3.47
	Linux addons-340351 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [33103b22f5e2e6087942fc63016d85dfd0e2c61c6a1709289b68fee655322d1c] <==
	I0328 03:34:58.197403       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 03:34:58.210548       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:34:58.210581       1 main.go:227] handling current node
	I0328 03:35:08.227106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:08.227139       1 main.go:227] handling current node
	I0328 03:35:18.241097       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:18.241124       1 main.go:227] handling current node
	I0328 03:35:28.253809       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:28.253834       1 main.go:227] handling current node
	I0328 03:35:38.258021       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:38.258062       1 main.go:227] handling current node
	I0328 03:35:48.270509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:48.270547       1 main.go:227] handling current node
	I0328 03:35:58.274943       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:58.274975       1 main.go:227] handling current node
	I0328 03:36:08.284518       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:08.284554       1 main.go:227] handling current node
	I0328 03:36:18.296545       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:18.296817       1 main.go:227] handling current node
	I0328 03:36:28.331388       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:28.331417       1 main.go:227] handling current node
	I0328 03:36:38.343680       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:38.343710       1 main.go:227] handling current node
	I0328 03:36:48.357027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:48.357057       1 main.go:227] handling current node
	
	
	==> kube-apiserver [d594d62b875012a62e08e3f1844dd8ef7059b086bc7cec4b2ff305cc89ec476c] <==
	W0328 03:34:33.283553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 03:34:33.283568       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 03:34:33.285254       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 03:34:33.315777       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0328 03:34:34.256796       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.97.222.90"}
	I0328 03:34:34.278327       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.111.220.237"}
	I0328 03:34:34.352233       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0328 03:34:35.435734       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.251.69"}
	I0328 03:34:35.450245       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0328 03:34:35.629422       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.213.148"}
	I0328 03:34:36.818094       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.96.58"}
	W0328 03:35:22.763072       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 03:35:22.763140       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0328 03:35:22.764143       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	E0328 03:35:22.765367       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	E0328 03:35:22.770232       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	I0328 03:35:22.879830       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0328 03:36:12.358899       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0328 03:36:13.392416       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0328 03:36:17.937285       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0328 03:36:18.276072       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.1.148"}
	I0328 03:36:23.778396       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0328 03:36:27.078877       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0328 03:36:27.970648       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.118.125"}
	
	
	==> kube-controller-manager [e0f96c58a12b1920bb47425b1f4980aef254a7e7c11b731f70007f9b4d8391a6] <==
	E0328 03:36:22.537572       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0328 03:36:26.575626       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 03:36:26.575676       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 03:36:27.045713       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 03:36:27.045939       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 03:36:27.725933       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0328 03:36:27.736602       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-gzhzb"
	I0328 03:36:27.749698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="24.369241ms"
	I0328 03:36:27.788629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.877243ms"
	I0328 03:36:27.806414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.727481ms"
	I0328 03:36:27.806607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="153.169µs"
	I0328 03:36:29.505854       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0328 03:36:29.506533       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0328 03:36:30.110646       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:36:30.110687       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0328 03:36:31.082979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="92.412µs"
	I0328 03:36:32.059624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.453µs"
	I0328 03:36:33.064238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.372µs"
	I0328 03:36:39.254705       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0328 03:36:45.140029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="155.778µs"
	I0328 03:36:45.236081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="7.532µs"
	I0328 03:36:45.241856       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0328 03:36:45.280799       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0328 03:36:48.641727       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0328 03:36:48.736741       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [0b803a1e6aaec0ee43426f9b9d1d0e0424055aaa20012a6a57ab961a25c387a3] <==
	I0328 03:34:27.982153       1 server_others.go:72] "Using iptables proxy"
	I0328 03:34:28.041502       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0328 03:34:28.126596       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0328 03:34:28.126627       1 server_others.go:168] "Using iptables Proxier"
	I0328 03:34:28.137110       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0328 03:34:28.137133       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0328 03:34:28.137165       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 03:34:28.137363       1 server.go:865] "Version info" version="v1.29.3"
	I0328 03:34:28.137374       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 03:34:28.148305       1 config.go:188] "Starting service config controller"
	I0328 03:34:28.148347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 03:34:28.148381       1 config.go:97] "Starting endpoint slice config controller"
	I0328 03:34:28.148386       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 03:34:28.148732       1 config.go:315] "Starting node config controller"
	I0328 03:34:28.148739       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 03:34:28.248918       1 shared_informer.go:318] Caches are synced for node config
	I0328 03:34:28.248965       1 shared_informer.go:318] Caches are synced for service config
	I0328 03:34:28.249017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ba25d3bfe399f4cc75bc23cd361e00c423b4b8b267b8b047f72f4bcb09894c1d] <==
	W0328 03:34:10.797436       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 03:34:10.797455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 03:34:10.797559       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 03:34:10.797577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 03:34:10.797654       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 03:34:10.797671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 03:34:10.797753       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:10.797769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:10.797847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 03:34:10.797884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 03:34:10.797978       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 03:34:10.797995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 03:34:10.798081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 03:34:10.798142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 03:34:10.798237       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:10.798257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:10.798323       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 03:34:10.798340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 03:34:11.759447       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 03:34:11.759670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 03:34:11.780942       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:11.781165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:11.841524       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 03:34:11.841565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 03:34:12.176556       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
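The reflector "forbidden" warnings above are a startup race: the scheduler's informers begin listing before its RBAC grants are fully visible, and they clear once authorization settles (the final client-ca sync line). A hedged sketch of checking such a permission explicitly with a SelfSubjectAccessReview, assuming the default kubeconfig (illustrative; the scheduler itself simply retries):

	// Ask the API server whether the current identity may list a resource.
	// The scheduler's reflectors hit exactly this kind of RBAC denial while
	// permissions are still being bootstrapped.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb: "list", Group: "apps", Resource: "statefulsets",
				},
			},
		}
		resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}
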
	
	
	==> kubelet <==
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.262289    1490 scope.go:117] "RemoveContainer" containerID="cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.262723    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758"} err="failed to get container status \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe1a96bfa96b037dc6eeae4b46d95c446d1d5ac28907ee1952640b0ebdbc758\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.262748    1490 scope.go:117] "RemoveContainer" containerID="97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263083    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628"} err="failed to get container status \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\": rpc error: code = NotFound desc = an error occurred when try to find container \"97409b53e29361b17c8d929f029595237c6d562e22a0d7885f494e61016f8628\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263110    1490 scope.go:117] "RemoveContainer" containerID="948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263435    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122"} err="failed to get container status \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": rpc error: code = NotFound desc = an error occurred when try to find container \"948696db5e310c036c7c54534ff0fc70cc5a9b1e77183037b15857410098b122\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263462    1490 scope.go:117] "RemoveContainer" containerID="07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263770    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f"} err="failed to get container status \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"07c4958f219480de7db830629f902342042656c6dcefa6fbd7d9896c3d628d5f\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.263799    1490 scope.go:117] "RemoveContainer" containerID="e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.264104    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8"} err="failed to get container status \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0224383b34f1d405e8fa8979076a26c745768bf4c0b3b2179e0c2c2ad095bc8\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.264134    1490 scope.go:117] "RemoveContainer" containerID="4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.264472    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0"} err="failed to get container status \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ebe5bddae6da0f669c9fe67822241e89e13dfe42af38473725eb8ce6f2f65e0\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.264498    1490 scope.go:117] "RemoveContainer" containerID="86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.271829    1490 scope.go:117] "RemoveContainer" containerID="86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: E0328 03:36:49.272283    1490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\": not found" containerID="86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.272361    1490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904"} err="failed to get container status \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\": rpc error: code = NotFound desc = an error occurred when try to find container \"86e1bcdda20954e37c423de64b34e180ae26842d9520ab667fb49523d3710904\": not found"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.290850    1490 reconciler_common.go:300] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fcc8acbf-a1b1-4585-9ad0-d490f65f1171-socket-dir\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.290893    1490 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7lqbr\" (UniqueName: \"kubernetes.io/projected/f455f10e-f271-4bba-8b18-8ced67632a6d-kube-api-access-7lqbr\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.290908    1490 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gpghg\" (UniqueName: \"kubernetes.io/projected/fcc8acbf-a1b1-4585-9ad0-d490f65f1171-kube-api-access-gpghg\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.290920    1490 reconciler_common.go:300] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f455f10e-f271-4bba-8b18-8ced67632a6d-socket-dir\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.299917    1490 csi_plugin.go:183] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.987103    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3f46d2-ad55-4e5b-88be-12e3ca376390" path="/var/lib/kubelet/pods/6d3f46d2-ad55-4e5b-88be-12e3ca376390/volumes"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.987830    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac274503-85fe-4e33-a505-bb66d4824506" path="/var/lib/kubelet/pods/ac274503-85fe-4e33-a505-bb66d4824506/volumes"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.988207    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f455f10e-f271-4bba-8b18-8ced67632a6d" path="/var/lib/kubelet/pods/f455f10e-f271-4bba-8b18-8ced67632a6d/volumes"
	Mar 28 03:36:49 addons-340351 kubelet[1490]: I0328 03:36:49.988642    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc8acbf-a1b1-4585-9ad0-d490f65f1171" path="/var/lib/kubelet/pods/fcc8acbf-a1b1-4585-9ad0-d490f65f1171/volumes"
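The repeated "DeleteContainer returned error ... NotFound" entries above are a benign teardown race: the kubelet asks the runtime about containers containerd has already removed, and a gRPC NotFound answer means there is nothing left to delete. A sketch of that handling pattern; lookupContainerStatus is a hypothetical stand-in for the CRI status call, while the status/codes handling is the real gRPC API:

	// Treat a gRPC NotFound from the runtime during teardown as "already gone".
	package main

	import (
		"fmt"

		"google.golang.org/grpc/codes"
		"google.golang.org/grpc/status"
	)

	// lookupContainerStatus is a placeholder for a runtime query that can fail
	// with, e.g., rpc error: code = NotFound.
	func lookupContainerStatus(id string) error {
		return status.Errorf(codes.NotFound, "container %q not found", id)
	}

	func removeContainer(id string) error {
		err := lookupContainerStatus(id)
		if status.Code(err) == codes.NotFound {
			// The runtime already deleted the container; nothing to do.
			fmt.Printf("container %s already removed\n", id)
			return nil
		}
		return err
	}

	func main() {
		if err := removeContainer("86e1bcdda209"); err != nil {
			panic(err)
		}
	}
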
	
	
	==> storage-provisioner [9388e1fc0827b2ec62acd1adb3f7ebe22c40ec106491f57e26c0ed8790b641bc] <==
	I0328 03:34:33.276912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 03:34:33.311892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 03:34:33.311941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 03:34:33.360020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 03:34:33.360198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18!
	I0328 03:34:33.364822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b01c804-6b13-4481-a11f-8e2ea3705bd3", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18 became leader
	I0328 03:34:33.460812       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18!
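The storage-provisioner lines above are client-go leader election: acquire a lock object named k8s.io-minikube-hostpath in kube-system, then start the controller only after becoming leader. The event on the earlier line shows an Endpoints-based lock; a minimal sketch of the same pattern using the modern Lease lock, with identity and timings as illustrative assumptions:

	// Sketch of the leader-election flow behind "attempting to acquire leader
	// lease ... successfully acquired lease". Name/namespace mirror the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader; starting controller") },
				OnStoppedLeading: func() { fmt.Println("lost leadership") },
			},
		})
	}
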
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-340351 -n addons-340351
helpers_test.go:261: (dbg) Run:  kubectl --context addons-340351 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.18s)

x
+
TestAddons/parallel/CloudSpanner (8.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-79w62" [78970774-a1e5-4c9d-890b-695206d6cc40] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004334282s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-340351
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-340351: exit status 11 (627.28441ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-28T03:37:07Z" level=error msg="stat /run/containerd/runc/k8s.io/8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 addons disable cloud-spanner -p addons-340351" : exit status 11
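The MK_ADDON_DISABLE_PAUSED exit comes from minikube's pre-disable check for paused containers, which shells out to the runc command shown verbatim in the stderr above; here runc itself failed because a container's state directory was deleted mid-teardown between listing and stat. A sketch reproducing that check, with the command line taken from the log and the decoded JSON fields as an assumption:

	// Run the runc list command from the stderr above and report paused
	// containers. The runcContainer fields are an illustrative assumption.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
			"list", "-f", "json").Output()
		if err != nil {
			// A container torn down between readdir and stat yields exactly the
			// "stat ...: no such file or directory" failure seen in the log.
			return nil, fmt.Errorf("list paused: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			panic(err)
		}
		fmt.Println("paused containers:", ids)
	}
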
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-340351
helpers_test.go:235: (dbg) docker inspect addons-340351:

-- stdout --
	[
	    {
	        "Id": "ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41",
	        "Created": "2024-03-28T03:33:50.879043322Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3256679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T03:33:51.163351351Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/hosts",
	        "LogPath": "/var/lib/docker/containers/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41/ddc972291ebed3b902f4d2fcb4f8dc9fed404aca7e445bfe9ec20bbc4f89be41-json.log",
	        "Name": "/addons-340351",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-340351:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-340351",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c-init/diff:/var/lib/docker/overlay2/30131fd39d8244f5536f8ed96d2d3a8ceec5075331a54f31974379c0fc24022e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6aac8c5b32311aeabba5bfe0fe045e961dba56c323d33612bfa94a7e36f714c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-340351",
	                "Source": "/var/lib/docker/volumes/addons-340351/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-340351",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-340351",
	                "name.minikube.sigs.k8s.io": "addons-340351",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "362485c338d265f18b8f246e3c57edfa50d55f03fc51b2a73cb5710c90028c7f",
	            "SandboxKey": "/var/run/docker/netns/362485c338d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36229"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-340351": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "d8f795ce1dbe7578c09d0062e640e183397fd16ae79c8ee0ea01c72d97c71ebb",
	                    "EndpointID": "f232e14030a89bf3efb8b49fe48cfe4d27bd34f25a8ef79c485ffd39359923f6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-340351",
	                        "ddc972291ebe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-340351 -n addons-340351
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-340351 logs -n 25: (1.493451685s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-831467              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0  |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-831467              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-417144              | download-only-417144   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-613150              | download-only-613150   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-831467              | download-only-831467   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | --download-only -p                   | download-docker-513448 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | download-docker-513448               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p download-docker-513448            | download-docker-513448 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | --download-only -p                   | binary-mirror-112527   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | binary-mirror-112527                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:33653               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-112527              | binary-mirror-112527   | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| addons  | enable dashboard -p                  | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| start   | -p addons-340351 --wait=true         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:35 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         | --container-runtime=containerd       |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-340351 ip                     | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:35 UTC | 28 Mar 24 03:35 UTC |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:35 UTC | 28 Mar 24 03:35 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-340351 addons                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | addons-340351                        |                        |         |                |                     |                     |
	| ssh     | addons-340351 ssh curl -s            | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-340351 ip                     | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-340351 addons disable         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	| addons  | addons-340351 addons                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | disable csi-hostpath-driver          |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | addons-340351 addons                 | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:36 UTC | 28 Mar 24 03:36 UTC |
	|         | disable volumesnapshots              |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:37 UTC | 28 Mar 24 03:37 UTC |
	|         | -p addons-340351                     |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-340351          | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:37 UTC |                     |
	|         | addons-340351                        |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 03:33:26
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 03:33:26.508915 3256235 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:33:26.509097 3256235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:26.509110 3256235 out.go:304] Setting ErrFile to fd 2...
	I0328 03:33:26.509116 3256235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:26.509382 3256235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:33:26.509885 3256235 out.go:298] Setting JSON to false
	I0328 03:33:26.510761 3256235 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":40544,"bootTime":1711556262,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:33:26.510831 3256235 start.go:139] virtualization:  
	I0328 03:33:26.547293 3256235 out.go:177] * [addons-340351] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:33:26.579004 3256235 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 03:33:26.579116 3256235 notify.go:220] Checking for updates...
	I0328 03:33:26.611212 3256235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:33:26.650488 3256235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:33:26.676013 3256235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:33:26.708239 3256235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 03:33:26.740501 3256235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 03:33:26.772704 3256235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:33:26.791323 3256235 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:33:26.791436 3256235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:26.844367 3256235 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:26.833215235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:26.844503 3256235 docker.go:295] overlay module found
	I0328 03:33:26.867828 3256235 out.go:177] * Using the docker driver based on user configuration
	I0328 03:33:26.901058 3256235 start.go:297] selected driver: docker
	I0328 03:33:26.901086 3256235 start.go:901] validating driver "docker" against <nil>
	I0328 03:33:26.901100 3256235 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 03:33:26.901781 3256235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:26.971375 3256235 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:26.962637936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:26.971549 3256235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 03:33:26.971783 3256235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 03:33:26.996147 3256235 out.go:177] * Using Docker driver with root privileges
	I0328 03:33:27.027928 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:33:27.027959 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:27.027970 3256235 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 03:33:27.028058 3256235 start.go:340] cluster config:
	{Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:27.060847 3256235 out.go:177] * Starting "addons-340351" primary control-plane node in "addons-340351" cluster
	I0328 03:33:27.095630 3256235 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 03:33:27.121614 3256235 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 03:33:27.156251 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:27.156345 3256235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0328 03:33:27.156358 3256235 cache.go:56] Caching tarball of preloaded images
	I0328 03:33:27.156259 3256235 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 03:33:27.156456 3256235 preload.go:173] Found /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 03:33:27.156473 3256235 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0328 03:33:27.157435 3256235 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json ...
	I0328 03:33:27.157477 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json: {Name:mkf325209a5b5c2613c0ed0e32acef3e8e2ab51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:27.170640 3256235 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:33:27.170772 3256235 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 03:33:27.170797 3256235 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 03:33:27.170802 3256235 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 03:33:27.170811 3256235 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 03:33:27.170822 3256235 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from local cache
	I0328 03:33:43.388963 3256235 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from cached tarball
	I0328 03:33:43.389004 3256235 cache.go:194] Successfully downloaded all kic artifacts
	I0328 03:33:43.389038 3256235 start.go:360] acquireMachinesLock for addons-340351: {Name:mk22ef88c2f18fdd8b3efd921e57718dafc59b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 03:33:43.389166 3256235 start.go:364] duration metric: took 104.85µs to acquireMachinesLock for "addons-340351"
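The machine lock above is a file-based mutex with the retry parameters shown in the log line (Delay:500ms, Timeout:10m0s). A minimal sketch of that acquire-with-retry pattern, using a plain O_EXCL lock file as a hypothetical stand-in rather than minikube's actual lock implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire retries creating an exclusive lock file every delay until
	// timeout, mirroring the Delay/Timeout fields in the log line above.
	// Hypothetical stand-in; not minikube's real lock code.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				return f.Close() // lock held; caller removes the file to release
			}
			if !errors.Is(err, os.ErrExist) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("acquiring lock %s: timed out after %s", path, timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		fmt.Println(acquire("/tmp/addons-340351.lock", 500*time.Millisecond, 10*time.Minute))
	}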
	I0328 03:33:43.389205 3256235 start.go:93] Provisioning new machine with config: &{Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 03:33:43.389284 3256235 start.go:125] createHost starting for "" (driver="docker")
	I0328 03:33:43.391815 3256235 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0328 03:33:43.392046 3256235 start.go:159] libmachine.API.Create for "addons-340351" (driver="docker")
	I0328 03:33:43.392081 3256235 client.go:168] LocalClient.Create starting
	I0328 03:33:43.392180 3256235 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem
	I0328 03:33:43.919443 3256235 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem
	I0328 03:33:44.422231 3256235 cli_runner.go:164] Run: docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0328 03:33:44.440692 3256235 cli_runner.go:211] docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0328 03:33:44.440791 3256235 network_create.go:281] running [docker network inspect addons-340351] to gather additional debugging logs...
	I0328 03:33:44.440811 3256235 cli_runner.go:164] Run: docker network inspect addons-340351
	W0328 03:33:44.458186 3256235 cli_runner.go:211] docker network inspect addons-340351 returned with exit code 1
	I0328 03:33:44.458219 3256235 network_create.go:284] error running [docker network inspect addons-340351]: docker network inspect addons-340351: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-340351 not found
	I0328 03:33:44.458232 3256235 network_create.go:286] output of [docker network inspect addons-340351]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-340351 not found
	
	** /stderr **
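The non-zero exit above is the expected first-start path: the network does not exist yet, and the "not found" stderr is what tells minikube to create it rather than fail. Roughly, the probe reduces to the following (hypothetical helper; assumes the docker CLI is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists probes for a Docker network by name, as in the
	// "docker network inspect" run above. A non-zero exit whose output
	// contains "not found" means "create it", not a hard error.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
		if err == nil {
			return true, nil
		}
		if strings.Contains(string(out), "not found") {
			return false, nil
		}
		return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
	}

	func main() {
		fmt.Println(networkExists("addons-340351"))
	}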
	I0328 03:33:44.458356 3256235 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 03:33:44.474744 3256235 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400252b1c0}
	I0328 03:33:44.474789 3256235 network_create.go:124] attempt to create docker network addons-340351 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0328 03:33:44.474849 3256235 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-340351 addons-340351
	I0328 03:33:44.543201 3256235 network_create.go:108] docker network addons-340351 192.168.49.0/24 created
	I0328 03:33:44.543236 3256235 kic.go:121] calculated static IP "192.168.49.2" for the "addons-340351" container
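The static IP falls out of the subnet chosen above: the gateway takes the first host address (.1) and the first container gets the next one (.2). A small sketch of that arithmetic, assuming an IPv4 subnet whose network address ends in .0 (as 192.168.49.0/24 does); firstClientIP is a hypothetical name:

	package main

	import (
		"fmt"
		"net"
	)

	// firstClientIP returns the second usable address in the subnet
	// (the gateway takes the first), matching the "calculated static IP
	// 192.168.49.2" step above. Assumes a subnet base ending in .0.
	func firstClientIP(cidr string) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("IPv4 subnet expected: %s", cidr)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += 2 // network address +1 is the gateway, +2 the first client
		return out, nil
	}

	func main() {
		ip, err := firstClientIP("192.168.49.0/24")
		fmt.Println(ip, err) // 192.168.49.2 <nil>
	}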
	I0328 03:33:44.543327 3256235 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0328 03:33:44.556989 3256235 cli_runner.go:164] Run: docker volume create addons-340351 --label name.minikube.sigs.k8s.io=addons-340351 --label created_by.minikube.sigs.k8s.io=true
	I0328 03:33:44.572234 3256235 oci.go:103] Successfully created a docker volume addons-340351
	I0328 03:33:44.572347 3256235 cli_runner.go:164] Run: docker run --rm --name addons-340351-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --entrypoint /usr/bin/test -v addons-340351:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib
	I0328 03:33:46.532617 3256235 cli_runner.go:217] Completed: docker run --rm --name addons-340351-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --entrypoint /usr/bin/test -v addons-340351:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib: (1.960208397s)
	I0328 03:33:46.532648 3256235 oci.go:107] Successfully prepared a docker volume addons-340351
	I0328 03:33:46.532694 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:46.532716 3256235 kic.go:194] Starting extracting preloaded images to volume ...
	I0328 03:33:46.532800 3256235 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340351:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir
	I0328 03:33:50.817093 3256235 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340351:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284253306s)
	I0328 03:33:50.817129 3256235 kic.go:203] duration metric: took 4.284409667s to extract preloaded images to volume ...
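The extraction runs a throwaway container with tar as its entrypoint, so the lz4 preload is unpacked directly into the named volume that later backs /var in the node container. A sketch of that invocation (hypothetical wrapper around the docker CLI; paths and names taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed preload tarball into a
	// named Docker volume via a one-off container whose entrypoint is
	// tar, as in the "extracting preloaded images to volume" step above.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(extractPreload(
			"preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4",
			"addons-340351",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485"))
	}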
	W0328 03:33:50.817273 3256235 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0328 03:33:50.817391 3256235 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0328 03:33:50.865792 3256235 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-340351 --name addons-340351 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340351 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-340351 --network addons-340351 --ip 192.168.49.2 --volume addons-340351:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82
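Note the --publish=127.0.0.1::8443 style flags: each leaves the host port empty, so Docker picks an ephemeral loopback port that later steps recover by inspecting the container (the SSH port resolves to 36229 below). One way to read such a mapping back, as a hypothetical helper over `docker port`:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor returns the ephemeral host port Docker bound for a
	// given container port, e.g. hostPortFor("addons-340351", "22/tcp").
	func hostPortFor(container, port string) (string, error) {
		out, err := exec.Command("docker", "port", container, port).Output()
		if err != nil {
			return "", err
		}
		// First line looks like "127.0.0.1:36229"; keep what follows the last colon.
		line := strings.SplitN(strings.TrimSpace(string(out)), "\n", 2)[0]
		i := strings.LastIndex(line, ":")
		if i < 0 {
			return "", fmt.Errorf("unexpected docker port output: %q", line)
		}
		return line[i+1:], nil
	}

	func main() {
		fmt.Println(hostPortFor("addons-340351", "22/tcp"))
	}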
	I0328 03:33:51.172683 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Running}}
	I0328 03:33:51.190960 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.210326 3256235 cli_runner.go:164] Run: docker exec addons-340351 stat /var/lib/dpkg/alternatives/iptables
	I0328 03:33:51.281258 3256235 oci.go:144] the created container "addons-340351" has a running status.
	I0328 03:33:51.281285 3256235 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa...
	I0328 03:33:51.664417 3256235 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0328 03:33:51.689443 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.707834 3256235 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0328 03:33:51.707857 3256235 kic_runner.go:114] Args: [docker exec --privileged addons-340351 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0328 03:33:51.780134 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:33:51.799495 3256235 machine.go:94] provisionDockerMachine start ...
	I0328 03:33:51.799744 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:51.823520 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:51.823781 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:51.823790 3256235 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 03:33:51.991749 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340351
	
	I0328 03:33:51.991798 3256235 ubuntu.go:169] provisioning hostname "addons-340351"
	I0328 03:33:51.991882 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.014822 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:52.015073 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:52.015085 3256235 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-340351 && echo "addons-340351" | sudo tee /etc/hostname
	I0328 03:33:52.178786 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340351
	
	I0328 03:33:52.178933 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.197272 3256235 main.go:141] libmachine: Using SSH client type: native
	I0328 03:33:52.197507 3256235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36229 <nil> <nil>}
	I0328 03:33:52.197523 3256235 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-340351' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-340351/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-340351' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 03:33:52.336340 3256235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 03:33:52.336366 3256235 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18485-3249988/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-3249988/.minikube}
	I0328 03:33:52.336386 3256235 ubuntu.go:177] setting up certificates
	I0328 03:33:52.336398 3256235 provision.go:84] configureAuth start
	I0328 03:33:52.336483 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:52.352542 3256235 provision.go:143] copyHostCerts
	I0328 03:33:52.352633 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem (1123 bytes)
	I0328 03:33:52.352770 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem (1675 bytes)
	I0328 03:33:52.352850 3256235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem (1078 bytes)
	I0328 03:33:52.352913 3256235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem org=jenkins.addons-340351 san=[127.0.0.1 192.168.49.2 addons-340351 localhost minikube]
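The server cert is generated with the SAN list shown (loopback, the node IP, and several hostnames) so one certificate satisfies every way the endpoint can be addressed. A compact sketch of issuing a certificate with those SANs using Go's stdlib; self-signed here for brevity, whereas the step above signs with the minikube CA:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-340351"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"addons-340351", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}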
	I0328 03:33:52.845534 3256235 provision.go:177] copyRemoteCerts
	I0328 03:33:52.845612 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 03:33:52.845653 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:52.860119 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:52.956812 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 03:33:52.979822 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 03:33:53.004380 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 03:33:53.029668 3256235 provision.go:87] duration metric: took 693.252709ms to configureAuth
	I0328 03:33:53.029694 3256235 ubuntu.go:193] setting minikube options for container-runtime
	I0328 03:33:53.029885 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:33:53.029902 3256235 machine.go:97] duration metric: took 1.230237745s to provisionDockerMachine
	I0328 03:33:53.029910 3256235 client.go:171] duration metric: took 9.637821806s to LocalClient.Create
	I0328 03:33:53.029924 3256235 start.go:167] duration metric: took 9.637877846s to libmachine.API.Create "addons-340351"
	I0328 03:33:53.029936 3256235 start.go:293] postStartSetup for "addons-340351" (driver="docker")
	I0328 03:33:53.029946 3256235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 03:33:53.029998 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 03:33:53.030047 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.045455 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.141602 3256235 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 03:33:53.144806 3256235 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 03:33:53.144843 3256235 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 03:33:53.144855 3256235 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 03:33:53.144863 3256235 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 03:33:53.144872 3256235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/addons for local assets ...
	I0328 03:33:53.144942 3256235 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/files for local assets ...
	I0328 03:33:53.144970 3256235 start.go:296] duration metric: took 115.02837ms for postStartSetup
	I0328 03:33:53.145278 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:53.159955 3256235 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/config.json ...
	I0328 03:33:53.160253 3256235 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 03:33:53.160299 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.175592 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.269005 3256235 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 03:33:53.273143 3256235 start.go:128] duration metric: took 9.883842827s to createHost
	I0328 03:33:53.273165 3256235 start.go:83] releasing machines lock for "addons-340351", held for 9.883987382s
	I0328 03:33:53.273248 3256235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340351
	I0328 03:33:53.287488 3256235 ssh_runner.go:195] Run: cat /version.json
	I0328 03:33:53.287540 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.287819 3256235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 03:33:53.287879 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:33:53.311310 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.312404 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:33:53.404060 3256235 ssh_runner.go:195] Run: systemctl --version
	I0328 03:33:53.521112 3256235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 03:33:53.525457 3256235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 03:33:53.551240 3256235 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0328 03:33:53.551320 3256235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 03:33:53.581554 3256235 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0328 03:33:53.581578 3256235 start.go:494] detecting cgroup driver to use...
	I0328 03:33:53.581612 3256235 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 03:33:53.581674 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 03:33:53.594431 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 03:33:53.606660 3256235 docker.go:217] disabling cri-docker service (if available) ...
	I0328 03:33:53.606724 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 03:33:53.621011 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 03:33:53.635728 3256235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 03:33:53.732581 3256235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 03:33:53.828098 3256235 docker.go:233] disabling docker service ...
	I0328 03:33:53.828190 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 03:33:53.848054 3256235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 03:33:53.859878 3256235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 03:33:53.955546 3256235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 03:33:54.055598 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 03:33:54.066921 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 03:33:54.083894 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 03:33:54.094366 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 03:33:54.105158 3256235 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 03:33:54.105280 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 03:33:54.115512 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 03:33:54.126007 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 03:33:54.136717 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 03:33:54.147177 3256235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 03:33:54.156924 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 03:33:54.166649 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 03:33:54.177236 3256235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
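The sed series above edits containerd's config.toml in place; each command is a regex rewrite that preserves indentation via a capture group, so rerunning it is harmless. The same idempotent key rewrite in Go (hypothetical setTomlKey, stdlib-only):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setTomlKey rewrites every `key = ...` line in a TOML-ish config the
	// way the `sed -i -r 's|^( *)key = .*$|\1key = value|'` commands above
	// do, keeping the original indentation.
	func setTomlKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^(\s*)` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		fmt.Println(setTomlKey("config.toml", "SystemdCgroup", "false"))
	}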
	I0328 03:33:54.188382 3256235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 03:33:54.197034 3256235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 03:33:54.205309 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:33:54.304957 3256235 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 03:33:54.437461 3256235 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 03:33:54.437564 3256235 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 03:33:54.441000 3256235 start.go:562] Will wait 60s for crictl version
	I0328 03:33:54.441104 3256235 ssh_runner.go:195] Run: which crictl
	I0328 03:33:54.444290 3256235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 03:33:54.479793 3256235 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0328 03:33:54.479944 3256235 ssh_runner.go:195] Run: containerd --version
	I0328 03:33:54.501763 3256235 ssh_runner.go:195] Run: containerd --version
	I0328 03:33:54.525311 3256235 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0328 03:33:54.526951 3256235 cli_runner.go:164] Run: docker network inspect addons-340351 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 03:33:54.539817 3256235 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0328 03:33:54.543213 3256235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
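Both hosts-file updates in this log follow one pattern: drop any stale line for the name, append the fresh IP-to-name mapping, and copy the result over /etc/hosts in one step. A stdlib sketch of that upsert (hypothetical helper; assumes the tab-separated format shown in the command above):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites a hosts-format file so that exactly one line
	// maps the given name, mirroring the grep -v / echo / cp pipeline above.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		fmt.Println(upsertHost("hosts.test", "192.168.49.1", "host.minikube.internal"))
	}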
	I0328 03:33:54.553458 3256235 kubeadm.go:877] updating cluster {Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 03:33:54.553579 3256235 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:54.553643 3256235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 03:33:54.592751 3256235 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 03:33:54.592778 3256235 containerd.go:534] Images already preloaded, skipping extraction
	I0328 03:33:54.592838 3256235 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 03:33:54.627335 3256235 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 03:33:54.627357 3256235 cache_images.go:84] Images are preloaded, skipping loading
	I0328 03:33:54.627365 3256235 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0328 03:33:54.627465 3256235 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-340351 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 03:33:54.627535 3256235 ssh_runner.go:195] Run: sudo crictl info
	I0328 03:33:54.668582 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:33:54.668608 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:54.668620 3256235 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 03:33:54.668650 3256235 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-340351 NodeName:addons-340351 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 03:33:54.668799 3256235 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-340351"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 03:33:54.668875 3256235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 03:33:54.677577 3256235 binaries.go:44] Found k8s binaries, skipping transfer
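The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers. A quick stdlib-only way to enumerate the document kinds from a local copy (hypothetical file name):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Prints the "kind:" of each document in a multi-doc kubeadm config
	// like the one above. Plain string handling; no YAML library needed
	// for this quick inventory.
	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Println(strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}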
	I0328 03:33:54.677649 3256235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 03:33:54.686204 3256235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0328 03:33:54.704466 3256235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 03:33:54.722444 3256235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0328 03:33:54.740742 3256235 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0328 03:33:54.744495 3256235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 03:33:54.755289 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:33:54.845090 3256235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 03:33:54.859574 3256235 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351 for IP: 192.168.49.2
	I0328 03:33:54.859648 3256235 certs.go:194] generating shared ca certs ...
	I0328 03:33:54.859678 3256235 certs.go:226] acquiring lock for ca certs: {Name:mk654727350d982ceeedd640f586ca1658e18559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:54.860517 3256235 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key
	I0328 03:33:55.181825 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt ...
	I0328 03:33:55.181865 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt: {Name:mk5797fdd0e7a871dd7cc8cb611c61502a1449b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.182804 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key ...
	I0328 03:33:55.182829 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key: {Name:mk73aba0d6b2144be8203e586a02904016d466db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.183456 3256235 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key
	I0328 03:33:55.425957 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt ...
	I0328 03:33:55.425987 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt: {Name:mka02e0583616b5adccc14bc61748a76734feac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.426671 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key ...
	I0328 03:33:55.426688 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key: {Name:mk128862586320a93862063d97690310c13a0509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.426782 3256235 certs.go:256] generating profile certs ...
	I0328 03:33:55.426854 3256235 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key
	I0328 03:33:55.426878 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt with IP's: []
	I0328 03:33:55.683495 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt ...
	I0328 03:33:55.683526 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: {Name:mk0de8ba36d63ea102ad10d44d2bbf1c3143896f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.684495 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key ...
	I0328 03:33:55.684513 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.key: {Name:mk7e857fb1a9b5e19f2991afb08cc3f69c4a8183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:55.684607 3256235 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638
	I0328 03:33:55.684626 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0328 03:33:56.042126 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 ...
	I0328 03:33:56.042158 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638: {Name:mk5a79ffee9a46ad2bef3c07aab9d891fd17073c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.043112 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638 ...
	I0328 03:33:56.043132 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638: {Name:mked53764170661be89b46c5c68a0ab80bd6eeca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.043709 3256235 certs.go:381] copying /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt.84f93638 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt
	I0328 03:33:56.043814 3256235 certs.go:385] copying /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key.84f93638 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key
	I0328 03:33:56.043870 3256235 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key
	I0328 03:33:56.043893 3256235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt with IP's: []
	I0328 03:33:56.304600 3256235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt ...
	I0328 03:33:56.304630 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt: {Name:mk393d71b89fbf8ff165f3c34812a846149bf605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.305297 3256235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key ...
	I0328 03:33:56.305315 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key: {Name:mka6238dc4cca2c10224cdd08ce3ef020bb67f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:56.305509 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 03:33:56.305558 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem (1078 bytes)
	I0328 03:33:56.305587 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem (1123 bytes)
	I0328 03:33:56.305621 3256235 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem (1675 bytes)
	I0328 03:33:56.306290 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 03:33:56.335502 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 03:33:56.358931 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 03:33:56.384113 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 03:33:56.409524 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0328 03:33:56.434079 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 03:33:56.458484 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 03:33:56.482962 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 03:33:56.507666 3256235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 03:33:56.532003 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 03:33:56.550386 3256235 ssh_runner.go:195] Run: openssl version
	I0328 03:33:56.555819 3256235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 03:33:56.565408 3256235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.568709 3256235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 03:33 /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.568780 3256235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 03:33:56.575621 3256235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
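The b5213941.0 link name is OpenSSL's subject hash of minikubeCA.pem plus a .0 suffix, which is how the system trust store looks certificates up; the `openssl x509 -hash -noout` run above computes exactly that value. The same computation via a hypothetical wrapper around the openssl CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash shells out to openssl to compute the subject hash that
	// names the /etc/ssl/certs/<hash>.0 symlink created above.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		fmt.Println(subjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}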
	I0328 03:33:56.584986 3256235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 03:33:56.588103 3256235 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 03:33:56.588159 3256235 kubeadm.go:391] StartCluster: {Name:addons-340351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-340351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:56.588248 3256235 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0328 03:33:56.588343 3256235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 03:33:56.631928 3256235 cri.go:89] found id: ""
	I0328 03:33:56.631996 3256235 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 03:33:56.640719 3256235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 03:33:56.649299 3256235 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0328 03:33:56.649396 3256235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 03:33:56.659849 3256235 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 03:33:56.659869 3256235 kubeadm.go:156] found existing configuration files:
	
	I0328 03:33:56.659921 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 03:33:56.668308 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 03:33:56.668413 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 03:33:56.676710 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 03:33:56.685351 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 03:33:56.685425 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 03:33:56.693577 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 03:33:56.702370 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 03:33:56.702457 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 03:33:56.710539 3256235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 03:33:56.719012 3256235 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 03:33:56.719106 3256235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 03:33:56.727307 3256235 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0328 03:33:56.818234 3256235 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0328 03:33:56.886479 3256235 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 03:34:14.047702 3256235 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 03:34:14.047774 3256235 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 03:34:14.047859 3256235 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0328 03:34:14.047925 3256235 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0328 03:34:14.047958 3256235 kubeadm.go:309] OS: Linux
	I0328 03:34:14.048001 3256235 kubeadm.go:309] CGROUPS_CPU: enabled
	I0328 03:34:14.048062 3256235 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0328 03:34:14.048109 3256235 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0328 03:34:14.048155 3256235 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0328 03:34:14.048201 3256235 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0328 03:34:14.048249 3256235 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0328 03:34:14.048292 3256235 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0328 03:34:14.048345 3256235 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0328 03:34:14.048390 3256235 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0328 03:34:14.048466 3256235 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 03:34:14.048556 3256235 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 03:34:14.048643 3256235 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0328 03:34:14.048702 3256235 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 03:34:14.050808 3256235 out.go:204]   - Generating certificates and keys ...
	I0328 03:34:14.050902 3256235 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 03:34:14.050965 3256235 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 03:34:14.051028 3256235 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 03:34:14.051086 3256235 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 03:34:14.051149 3256235 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 03:34:14.051197 3256235 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 03:34:14.051248 3256235 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 03:34:14.051361 3256235 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-340351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 03:34:14.051411 3256235 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 03:34:14.051519 3256235 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-340351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 03:34:14.051581 3256235 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 03:34:14.051641 3256235 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 03:34:14.051683 3256235 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 03:34:14.051736 3256235 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 03:34:14.051785 3256235 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 03:34:14.051839 3256235 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 03:34:14.051890 3256235 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 03:34:14.051950 3256235 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 03:34:14.052001 3256235 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 03:34:14.052078 3256235 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 03:34:14.052141 3256235 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 03:34:14.054318 3256235 out.go:204]   - Booting up control plane ...
	I0328 03:34:14.054516 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 03:34:14.054651 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 03:34:14.054768 3256235 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 03:34:14.054915 3256235 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 03:34:14.055031 3256235 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 03:34:14.055075 3256235 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 03:34:14.055235 3256235 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 03:34:14.055314 3256235 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502594 seconds
	I0328 03:34:14.055424 3256235 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 03:34:14.055553 3256235 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 03:34:14.055613 3256235 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 03:34:14.055801 3256235 kubeadm.go:309] [mark-control-plane] Marking the node addons-340351 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 03:34:14.055858 3256235 kubeadm.go:309] [bootstrap-token] Using token: pmfpfc.jnx6gyasxx7a8lz7
	I0328 03:34:14.057824 3256235 out.go:204]   - Configuring RBAC rules ...
	I0328 03:34:14.057942 3256235 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 03:34:14.058036 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 03:34:14.058181 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0328 03:34:14.058325 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0328 03:34:14.058450 3256235 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 03:34:14.058545 3256235 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 03:34:14.058662 3256235 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 03:34:14.058706 3256235 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 03:34:14.058753 3256235 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 03:34:14.058757 3256235 kubeadm.go:309] 
	I0328 03:34:14.058819 3256235 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 03:34:14.058824 3256235 kubeadm.go:309] 
	I0328 03:34:14.058903 3256235 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 03:34:14.058907 3256235 kubeadm.go:309] 
	I0328 03:34:14.058933 3256235 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 03:34:14.058993 3256235 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 03:34:14.059045 3256235 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 03:34:14.059049 3256235 kubeadm.go:309] 
	I0328 03:34:14.059104 3256235 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 03:34:14.059108 3256235 kubeadm.go:309] 
	I0328 03:34:14.059157 3256235 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 03:34:14.059161 3256235 kubeadm.go:309] 
	I0328 03:34:14.059215 3256235 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 03:34:14.059293 3256235 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 03:34:14.059363 3256235 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 03:34:14.059367 3256235 kubeadm.go:309] 
	I0328 03:34:14.059454 3256235 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 03:34:14.059532 3256235 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 03:34:14.059537 3256235 kubeadm.go:309] 
	I0328 03:34:14.059623 3256235 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pmfpfc.jnx6gyasxx7a8lz7 \
	I0328 03:34:14.059730 3256235 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:997a843960cc5d2f806bfa4fc2e7f3f771ce9ed1a8f2f9b600560484642e5094 \
	I0328 03:34:14.059751 3256235 kubeadm.go:309] 	--control-plane 
	I0328 03:34:14.059755 3256235 kubeadm.go:309] 
	I0328 03:34:14.059842 3256235 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 03:34:14.059846 3256235 kubeadm.go:309] 
	I0328 03:34:14.059931 3256235 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pmfpfc.jnx6gyasxx7a8lz7 \
	I0328 03:34:14.060049 3256235 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:997a843960cc5d2f806bfa4fc2e7f3f771ce9ed1a8f2f9b600560484642e5094 
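The bootstrap token shown above (pmfpfc.jnx6gyasxx7a8lz7) expires after 24 hours by default, so a node joining later needs a fresh join line. A hedged sketch using standard kubeadm subcommands on the control plane:

	# Print a complete 'kubeadm join' command with a newly minted token
	# and the current CA certificate hash.
	sudo kubeadm token create --print-join-command
	# List existing bootstrap tokens and their remaining TTLs.
	sudo kubeadm token list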
	I0328 03:34:14.060057 3256235 cni.go:84] Creating CNI manager for ""
	I0328 03:34:14.060064 3256235 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:34:14.062069 3256235 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 03:34:14.063851 3256235 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 03:34:14.068630 3256235 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 03:34:14.068686 3256235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 03:34:14.113151 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
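Because the docker driver plus containerd runtime selects kindnet (cni.go:143 above), the applied cni.yaml should produce a kindnet DaemonSet. A sketch for verifying the rollout; the DaemonSet name kindnet in kube-system is an assumption based on minikube's stock manifest:

	# Watch the kindnet DaemonSet come up (name assumed; adjust if it differs).
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	# Confirm the CNI plugin wrote its config inside the node.
	ls /etc/cni/net.d/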
	I0328 03:34:14.433874 3256235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 03:34:14.434033 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:14.434115 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-340351 minikube.k8s.io/updated_at=2024_03_28T03_34_14_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=addons-340351 minikube.k8s.io/primary=true
	I0328 03:34:14.582691 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:14.582748 3256235 ops.go:34] apiserver oom_adj: -16
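The value -16 read back at ops.go:34 tells the kernel's OOM killer to strongly prefer other processes over kube-apiserver. A quick sketch for inspecting this by hand; note that /proc/<pid>/oom_adj is the legacy interface, and modern kernels expose oom_score_adj alongside it:

	# Legacy scale runs -17 (never kill) to +15; the apiserver sits near the bottom.
	cat /proc/$(pgrep kube-apiserver)/oom_adj
	# Modern scale runs -1000 to +1000 for the same process.
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj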
	I0328 03:34:15.083536 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:15.583428 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:16.082828 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:16.583668 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:17.083576 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:17.583802 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:18.083351 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:18.583178 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:19.083500 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:19.583787 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:20.083484 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:20.582827 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:21.083412 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:21.582989 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:22.083736 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:22.583161 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:23.083235 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:23.583081 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:24.082807 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:24.583066 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:25.083689 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:25.582830 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.083130 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.583005 3256235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 03:34:26.674317 3256235 kubeadm.go:1107] duration metric: took 12.240342244s to wait for elevateKubeSystemPrivileges
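The burst of get sa default calls above is a readiness poll: minikube loops twice per second from 03:34:14 to 03:34:26 until the default ServiceAccount exists, since the cluster-admin binding created at 03:34:14.434033 depends on it. An equivalent hand-rolled wait, as a hedged sketch:

	# Poll every 500ms until the default ServiceAccount is created,
	# mirroring the half-second cadence visible in the log timestamps.
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done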
	W0328 03:34:26.674368 3256235 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 03:34:26.674378 3256235 kubeadm.go:393] duration metric: took 30.086223096s to StartCluster
	I0328 03:34:26.674394 3256235 settings.go:142] acquiring lock: {Name:mkc9f345268bcac5ebc4aa579f709fe3221112b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:34:26.674944 3256235 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:34:26.675353 3256235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:34:26.676109 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 03:34:26.676142 3256235 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 03:34:26.678410 3256235 out.go:177] * Verifying Kubernetes components...
	I0328 03:34:26.676501 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:34:26.676513 3256235 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0328 03:34:26.680244 3256235 addons.go:69] Setting yakd=true in profile "addons-340351"
	I0328 03:34:26.680278 3256235 addons.go:234] Setting addon yakd=true in "addons-340351"
	I0328 03:34:26.680308 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.680846 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.681028 3256235 addons.go:69] Setting ingress-dns=true in profile "addons-340351"
	I0328 03:34:26.681052 3256235 addons.go:234] Setting addon ingress-dns=true in "addons-340351"
	I0328 03:34:26.681087 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.681456 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.681889 3256235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 03:34:26.682068 3256235 addons.go:69] Setting inspektor-gadget=true in profile "addons-340351"
	I0328 03:34:26.682096 3256235 addons.go:234] Setting addon inspektor-gadget=true in "addons-340351"
	I0328 03:34:26.682130 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.682495 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.682750 3256235 addons.go:69] Setting cloud-spanner=true in profile "addons-340351"
	I0328 03:34:26.682799 3256235 addons.go:234] Setting addon cloud-spanner=true in "addons-340351"
	I0328 03:34:26.682840 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.683302 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.684771 3256235 addons.go:69] Setting metrics-server=true in profile "addons-340351"
	I0328 03:34:26.684807 3256235 addons.go:234] Setting addon metrics-server=true in "addons-340351"
	I0328 03:34:26.684841 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.685222 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.691650 3256235 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-340351"
	I0328 03:34:26.691748 3256235 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-340351"
	I0328 03:34:26.691886 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.692383 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.700464 3256235 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-340351"
	I0328 03:34:26.705661 3256235 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-340351"
	I0328 03:34:26.701811 3256235 addons.go:69] Setting default-storageclass=true in profile "addons-340351"
	I0328 03:34:26.701829 3256235 addons.go:69] Setting gcp-auth=true in profile "addons-340351"
	I0328 03:34:26.701840 3256235 addons.go:69] Setting ingress=true in profile "addons-340351"
	I0328 03:34:26.702410 3256235 addons.go:69] Setting registry=true in profile "addons-340351"
	I0328 03:34:26.702424 3256235 addons.go:69] Setting storage-provisioner=true in profile "addons-340351"
	I0328 03:34:26.702430 3256235 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-340351"
	I0328 03:34:26.702439 3256235 addons.go:69] Setting volumesnapshots=true in profile "addons-340351"
	I0328 03:34:26.708918 3256235 addons.go:234] Setting addon volumesnapshots=true in "addons-340351"
	I0328 03:34:26.709074 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.714209 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.715733 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.716207 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.717305 3256235 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-340351"
	I0328 03:34:26.718243 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.717339 3256235 mustload.go:65] Loading cluster: addons-340351
	I0328 03:34:26.733171 3256235 config.go:182] Loaded profile config "addons-340351": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:34:26.733520 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.717353 3256235 addons.go:234] Setting addon ingress=true in "addons-340351"
	I0328 03:34:26.717364 3256235 addons.go:234] Setting addon registry=true in "addons-340351"
	I0328 03:34:26.762418 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.717389 3256235 addons.go:234] Setting addon storage-provisioner=true in "addons-340351"
	I0328 03:34:26.770411 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.770724 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.771132 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.717401 3256235 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-340351"
	I0328 03:34:26.786899 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.770125 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.796811 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.813710 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0328 03:34:26.829180 3256235 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 03:34:26.829259 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0328 03:34:26.829361 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
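The docker container inspect -f template above extracts the host port Docker mapped to the container's SSH port 22; every subsequent sshutil.go dial reuses the result (port 36229 here). Run standalone, a sketch:

	# Print the host port mapped to 22/tcp inside the addons-340351 container.
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-340351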
	I0328 03:34:26.851741 3256235 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0328 03:34:26.853678 3256235 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0328 03:34:26.855486 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0328 03:34:26.855506 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0328 03:34:26.855582 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.853832 3256235 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0328 03:34:26.857681 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0328 03:34:26.857769 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.853838 3256235 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0328 03:34:26.893273 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0328 03:34:26.893297 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0328 03:34:26.893364 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.913701 3256235 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0328 03:34:26.919425 3256235 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 03:34:26.919454 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0328 03:34:26.919543 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.968271 3256235 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0328 03:34:26.966383 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.967881 3256235 addons.go:234] Setting addon default-storageclass=true in "addons-340351"
	I0328 03:34:26.970521 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 03:34:26.970548 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 03:34:26.970617 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:26.968561 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:26.976269 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:26.990553 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 03:34:26.992586 3256235 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 03:34:26.992611 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 03:34:26.992684 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.080405 3256235 out.go:177]   - Using image docker.io/registry:2.8.3
	I0328 03:34:27.054031 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.054086 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.054542 3256235 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-340351"
	I0328 03:34:27.055444 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.087568 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0328 03:34:27.088217 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:27.089210 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0328 03:34:27.090906 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0328 03:34:27.090926 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0328 03:34:27.090982 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.089205 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:27.089198 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0328 03:34:27.089695 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:27.091480 3256235 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 03:34:27.092991 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 03:34:27.093082 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.094593 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0328 03:34:27.094617 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0328 03:34:27.094674 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.105519 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0328 03:34:27.112119 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0328 03:34:27.113999 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0328 03:34:27.120507 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:27.122098 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0328 03:34:27.123579 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0328 03:34:27.122377 3256235 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 03:34:27.126486 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0328 03:34:27.126560 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.128767 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0328 03:34:27.152799 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0328 03:34:27.158310 3256235 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0328 03:34:27.160010 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0328 03:34:27.160031 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0328 03:34:27.160092 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:27.158405 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.166197 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.241954 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.256418 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.265126 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.276771 3256235 out.go:177]   - Using image docker.io/busybox:stable
	I0328 03:34:27.270873 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.270928 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.282110 3256235 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0328 03:34:27.279422 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.281655 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.284301 3256235 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 03:34:27.284316 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0328 03:34:27.284485 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	W0328 03:34:27.298770 3256235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0328 03:34:27.298801 3256235 retry.go:31] will retry after 263.276035ms: ssh: handshake failed: EOF
	I0328 03:34:27.308171 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:27.713514 3256235 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.037370238s)
	I0328 03:34:27.713773 3256235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 03:34:27.713881 3256235 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.031976114s)
	I0328 03:34:27.713981 3256235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 03:34:27.770759 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0328 03:34:27.770785 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0328 03:34:27.926932 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 03:34:27.999153 3256235 node_ready.go:35] waiting up to 6m0s for node "addons-340351" to be "Ready" ...
	I0328 03:34:28.008788 3256235 node_ready.go:49] node "addons-340351" has status "Ready":"True"
	I0328 03:34:28.008869 3256235 node_ready.go:38] duration metric: took 9.563492ms for node "addons-340351" to be "Ready" ...
	I0328 03:34:28.008904 3256235 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 03:34:28.009533 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0328 03:34:28.009607 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0328 03:34:28.030833 3256235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4t7xb" in "kube-system" namespace to be "Ready" ...
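The node and pod readiness waits above (node_ready.go, pod_ready.go) have direct kubectl equivalents; a hedged sketch of the same checks:

	# Block until the node reports Ready (the log saw this within ~10ms).
	kubectl wait --for=condition=Ready node/addons-340351 --timeout=6m
	# Block until the CoreDNS pods are Ready, matching the k8s-app=kube-dns label.
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=6m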
	I0328 03:34:28.053843 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 03:34:28.058697 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 03:34:28.059989 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0328 03:34:28.060069 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0328 03:34:28.083522 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 03:34:28.114751 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0328 03:34:28.114826 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0328 03:34:28.147109 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0328 03:34:28.147213 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0328 03:34:28.154450 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0328 03:34:28.171344 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 03:34:28.184104 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0328 03:34:28.184128 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0328 03:34:28.225581 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0328 03:34:28.225614 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0328 03:34:28.241321 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 03:34:28.296159 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 03:34:28.296185 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0328 03:34:28.340474 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0328 03:34:28.340501 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0328 03:34:28.379964 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0328 03:34:28.379986 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0328 03:34:28.401857 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0328 03:34:28.401887 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0328 03:34:28.409427 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0328 03:34:28.409454 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0328 03:34:28.512777 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0328 03:34:28.512806 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0328 03:34:28.523003 3256235 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0328 03:34:28.523027 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0328 03:34:28.653310 3256235 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0328 03:34:28.653343 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0328 03:34:28.699865 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0328 03:34:28.711008 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 03:34:28.711082 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 03:34:28.753421 3256235 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0328 03:34:28.753496 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0328 03:34:28.756017 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0328 03:34:28.756100 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0328 03:34:28.787386 3256235 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 03:34:28.787463 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 03:34:28.807874 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0328 03:34:28.807952 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0328 03:34:28.829372 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0328 03:34:28.869188 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0328 03:34:28.869261 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0328 03:34:28.903085 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 03:34:28.921059 3256235 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0328 03:34:28.921136 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0328 03:34:28.925455 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0328 03:34:28.925527 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0328 03:34:29.013812 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0328 03:34:29.013886 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0328 03:34:29.067907 3256235 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 03:34:29.067979 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0328 03:34:29.071312 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0328 03:34:29.071384 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0328 03:34:29.259964 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0328 03:34:29.260037 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0328 03:34:29.322386 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 03:34:29.338736 3256235 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 03:34:29.338810 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0328 03:34:29.494902 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 03:34:29.574066 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0328 03:34:29.574141 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0328 03:34:29.580559 3256235 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.866732139s)
	I0328 03:34:29.580638 3256235 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
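The sed pipeline that just completed rewrites the Corefile before replacing the ConfigMap: it inserts a log directive ahead of errors and a hosts stanza (mapping 192.168.49.1 to host.minikube.internal, with fallthrough) ahead of the forward plugin. A sketch for inspecting the result:

	# Dump the patched Corefile; the injected stanza should read:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'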
	I0328 03:34:29.742971 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0328 03:34:29.743043 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0328 03:34:29.931892 3256235 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 03:34:29.931966 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0328 03:34:30.040102 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:30.064420 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 03:34:30.085813 3256235 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-340351" context rescaled to 1 replica
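kapi.go's rescale above trims CoreDNS to a single replica, the usual economy on a single-node minikube cluster. The equivalent manual operation, as a sketch:

	# Scale the coredns Deployment down to one replica and confirm.
	kubectl -n kube-system scale deployment coredns --replicas=1
	kubectl -n kube-system get deployment coredns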
	I0328 03:34:32.041698 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:32.058125 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.131156061s)
	I0328 03:34:32.058181 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.004238801s)
	I0328 03:34:32.058412 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.974818065s)
	I0328 03:34:32.058586 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.904050895s)
	I0328 03:34:32.058200 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.999424781s)
	I0328 03:34:32.058729 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.887289295s)
	W0328 03:34:32.077487 3256235 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
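The storage-provisioner-rancher failure above is an optimistic-concurrency conflict: two writers raced to update the StorageClass object. Re-issuing the annotation normally succeeds, and a JSON patch avoids the conflict because it does not send a resourceVersion; a hedged sketch:

	# Mark local-path as the default StorageClass; patch is retry-safe here
	# since it carries no resourceVersion, unlike the update that failed above.
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'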
	I0328 03:34:33.978200 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0328 03:34:33.978314 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:34.024807 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:34.315928 3256235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0328 03:34:34.421015 3256235 addons.go:234] Setting addon gcp-auth=true in "addons-340351"
	I0328 03:34:34.421119 3256235 host.go:66] Checking if "addons-340351" exists ...
	I0328 03:34:34.421642 3256235 cli_runner.go:164] Run: docker container inspect addons-340351 --format={{.State.Status}}
	I0328 03:34:34.445088 3256235 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0328 03:34:34.445140 3256235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340351
	I0328 03:34:34.471458 3256235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36229 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/addons-340351/id_rsa Username:docker}
	I0328 03:34:34.534178 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.292766667s)
	I0328 03:34:34.534221 3256235 addons.go:470] Verifying addon ingress=true in "addons-340351"
	I0328 03:34:34.537520 3256235 out.go:177] * Verifying ingress addon...
	I0328 03:34:34.534483 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.834584745s)
	I0328 03:34:34.534529 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.705087273s)
	I0328 03:34:34.534581 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.631426932s)
	I0328 03:34:34.534663 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.212201882s)
	I0328 03:34:34.534715 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.039749266s)
	I0328 03:34:34.541178 3256235 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0328 03:34:34.543497 3256235 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-340351 service yakd-dashboard -n yakd-dashboard
	
	I0328 03:34:34.541536 3256235 addons.go:470] Verifying addon metrics-server=true in "addons-340351"
	I0328 03:34:34.541553 3256235 addons.go:470] Verifying addon registry=true in "addons-340351"
	W0328 03:34:34.541599 3256235 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0328 03:34:34.549454 3256235 out.go:177] * Verifying registry addon...
	I0328 03:34:34.543941 3256235 retry.go:31] will retry after 364.40068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout/stderr: identical to the apply failure logged above.
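The failure and scheduled retry above are a CRD establishment race: the VolumeSnapshotClass object was submitted in the same apply batch as the CRD that defines it, and minikube recovers at 03:34:34.915081 by re-applying with --force. One conventional way to avoid the race entirely, as a hedged sketch using the manifest names from the log:

	# Apply the CRDs alone, wait until the API server reports them
	# Established, then apply the objects that depend on them.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml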
	I0328 03:34:34.547439 3256235 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0328 03:34:34.547898 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:34.549654 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:34.553429 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0328 03:34:34.559949 3256235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0328 03:34:34.560031 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:34.915081 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 03:34:35.047777 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:35.062488 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:35.585995 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:35.590613 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:35.700020 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.635492796s)
	I0328 03:34:35.700057 3256235 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-340351"
	I0328 03:34:35.700267 3256235 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.255156795s)
	I0328 03:34:35.702462 3256235 out.go:177] * Verifying csi-hostpath-driver addon...
	I0328 03:34:35.710969 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0328 03:34:35.712899 3256235 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 03:34:35.724987 3256235 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0328 03:34:35.726275 3256235 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0328 03:34:35.729266 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:35.729351 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0328 03:34:35.729367 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0328 03:34:35.800891 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0328 03:34:35.800920 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0328 03:34:35.825119 3256235 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0328 03:34:35.825145 3256235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0328 03:34:35.851051 3256235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
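
The three "scp memory" lines stream the rendered gcp-auth manifests onto the node; the Run line above then applies them with the bundled kubectl. A minimal os/exec sketch of that final invocation, with the binary, kubeconfig, and manifest paths copied from the log (the wrapper itself is illustrative, not minikube's ssh_runner):

	// Apply the three gcp-auth manifests with the node's bundled kubectl.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		args := []string{"apply"}
		for _, f := range []string{
			"/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"/etc/kubernetes/addons/gcp-auth-service.yaml",
			"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
		} {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.29.3/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
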
	I0328 03:34:36.062814 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:36.066432 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:36.225257 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:36.545601 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:36.558533 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:36.614077 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69895122s)
	I0328 03:34:36.725231 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:36.988061 3256235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.136966684s)
	I0328 03:34:36.990904 3256235 addons.go:470] Verifying addon gcp-auth=true in "addons-340351"
	I0328 03:34:36.994726 3256235 out.go:177] * Verifying gcp-auth addon...
	I0328 03:34:36.998267 3256235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0328 03:34:37.006996 3256235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0328 03:34:37.007019 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:37.047274 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:37.052298 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:37.059050 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:37.234383 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:37.502397 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:37.545786 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:37.558884 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:37.724950 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:38.003255 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:38.047160 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:38.060290 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:38.225188 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:38.502564 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:38.546475 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:38.558619 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:38.725199 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:39.004065 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:39.046508 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:39.059127 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:39.225087 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:39.502851 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:39.538662 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:39.547094 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:39.565271 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:39.727106 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:40.019787 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:40.063046 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:40.064602 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:40.225394 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:40.501926 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:40.547846 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:40.560831 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:40.727871 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:41.012666 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:41.054073 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:41.062368 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:41.228065 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:41.512474 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:41.546587 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:41.558432 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:41.725059 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:42.005281 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:42.038630 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:42.046391 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:42.058442 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:42.224719 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:42.502649 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:42.546346 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:42.559530 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:42.725278 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:43.004645 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:43.047827 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:43.059008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:43.225126 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:43.502083 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:43.550305 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:43.559258 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:43.724670 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:44.006181 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:44.046964 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:44.059283 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:44.223781 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:44.505292 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:44.538905 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:44.546263 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:44.559167 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:44.724859 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:45.033757 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:45.051207 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:45.067066 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:45.226297 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:45.501905 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:45.545882 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:45.558631 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:45.727459 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:46.002534 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:46.045950 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:46.058853 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:46.225933 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:46.502566 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:46.545253 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:46.558741 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:46.724077 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:47.003328 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:47.045879 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:47.046736 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:47.058168 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:47.224612 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:47.502687 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:47.545849 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:47.558350 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:47.724543 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:48.009413 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:48.047094 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:48.059589 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:48.224311 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:48.502073 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:48.545716 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:48.558520 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:48.724856 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:49.004770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:49.047063 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:49.058907 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:49.225196 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:49.502842 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:49.537492 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:49.547088 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:49.559091 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:49.723818 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:50.019853 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:50.047070 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:50.059382 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:50.224894 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:50.502505 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:50.546198 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:50.558970 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:50.723919 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:51.004809 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:51.045880 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:51.059593 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:51.224886 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:51.502627 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:51.537881 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:51.546229 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:51.561072 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:51.723815 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:52.010244 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:52.045650 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:52.058551 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:52.223887 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:52.502426 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:52.545364 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:52.562721 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:52.724488 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:53.004785 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:53.046260 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:53.058924 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:53.223736 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:53.501996 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:53.546246 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:53.559083 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:53.724534 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:54.008865 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:54.040964 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:54.047183 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:54.060008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:54.224572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:54.501950 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:54.545916 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:54.558575 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:54.724096 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:55.011728 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:55.047172 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:55.059746 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:55.226396 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:55.502899 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:55.545485 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:55.558972 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:55.724107 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:56.004828 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:56.046445 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:56.058091 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:56.223560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:56.503410 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:56.538853 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:56.546210 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:56.558840 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:56.723817 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:57.004716 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:57.046345 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:57.059234 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:57.224064 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:57.502277 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:57.546307 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:57.559137 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:57.723938 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:58.003389 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:58.047318 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:58.059482 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:58.225163 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:58.501977 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:58.546363 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:58.559008 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:58.723999 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:59.005404 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:59.037944 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:34:59.046636 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:59.058029 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:59.225876 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:34:59.502709 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:34:59.545846 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:34:59.558424 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:34:59.724136 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:00.021640 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:00.113518 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:00.128767 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:00.241017 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:00.504352 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:00.546191 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:00.560181 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:00.723938 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:01.004540 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:01.038474 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:01.046581 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:01.058323 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:01.224953 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:01.503528 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:01.548212 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:01.562618 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:01.727758 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:02.004117 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:02.047304 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:02.059009 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:02.226327 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:02.502428 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:02.545974 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:02.562815 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:02.725861 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:03.010296 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:03.046392 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:03.059223 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:03.224260 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:03.502813 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:03.538790 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:03.545899 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:03.559424 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:03.724688 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:04.020750 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:04.047099 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:04.059399 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:04.226264 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:04.503207 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:04.547347 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:04.559203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:04.733223 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:05.008217 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:05.047263 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:05.060444 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:05.225441 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:05.502471 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:05.539034 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:05.546764 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:05.559521 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:05.728983 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:06.003835 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:06.049169 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:06.062068 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:06.225633 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:06.502565 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:06.547426 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:06.558723 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:06.725301 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:07.013357 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:07.048089 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:07.059927 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:07.224782 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:07.502886 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:07.546525 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:07.558318 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:07.724542 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:08.009609 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:08.039027 3256235 pod_ready.go:102] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"False"
	I0328 03:35:08.046914 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:08.060973 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:08.225121 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:08.504521 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:08.545841 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:08.559089 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:08.725545 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:09.003864 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:09.046383 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:09.059568 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:09.225272 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:09.503080 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:09.547216 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:09.561770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:09.724573 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.026659 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:10.044065 3256235 pod_ready.go:92] pod "coredns-76f75df574-4t7xb" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.044095 3256235 pod_ready.go:81] duration metric: took 42.013170326s for pod "coredns-76f75df574-4t7xb" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.044107 3256235 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-d54jf" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.048370 3256235 pod_ready.go:97] error getting pod "coredns-76f75df574-d54jf" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-d54jf" not found
	I0328 03:35:10.048413 3256235 pod_ready.go:81] duration metric: took 4.268362ms for pod "coredns-76f75df574-d54jf" in "kube-system" namespace to be "Ready" ...
	E0328 03:35:10.048427 3256235 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-d54jf" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-d54jf" not found
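
The "not found" here is tolerated rather than fatal: the coredns replica existed when the wait list was built but was deleted in the meantime (minikube ordinarily scales the coredns Deployment down to a single replica, though the exact cause is not in this log). Sorting that case out from a real failure is what apimachinery's error helpers are for; a sketch with the pod name from the log:

	// Distinguish "pod deleted while we were waiting" from a real error.
	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_, err = cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-76f75df574-d54jf", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod gone; skip it instead of failing the wait")
		} else if err != nil {
			panic(err) // any other error is real
		}
	}
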
	I0328 03:35:10.048461 3256235 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.051836 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:10.057110 3256235 pod_ready.go:92] pod "etcd-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.057133 3256235 pod_ready.go:81] duration metric: took 8.658483ms for pod "etcd-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.057181 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.061294 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:10.066906 3256235 pod_ready.go:92] pod "kube-apiserver-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.066929 3256235 pod_ready.go:81] duration metric: took 9.715553ms for pod "kube-apiserver-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.066942 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.080659 3256235 pod_ready.go:92] pod "kube-controller-manager-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.080695 3256235 pod_ready.go:81] duration metric: took 13.744233ms for pod "kube-controller-manager-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.080708 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29lc9" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.224828 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.235436 3256235 pod_ready.go:92] pod "kube-proxy-29lc9" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.235468 3256235 pod_ready.go:81] duration metric: took 154.752868ms for pod "kube-proxy-29lc9" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.235480 3256235 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.502471 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:10.550785 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:10.564207 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:10.635433 3256235 pod_ready.go:92] pod "kube-scheduler-addons-340351" in "kube-system" namespace has status "Ready":"True"
	I0328 03:35:10.635460 3256235 pod_ready.go:81] duration metric: took 399.971514ms for pod "kube-scheduler-addons-340351" in "kube-system" namespace to be "Ready" ...
	I0328 03:35:10.635471 3256235 pod_ready.go:38] duration metric: took 42.626531442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
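
The pod_ready.go checks key off the pod's Ready condition rather than its phase, which is why a pod can be Running yet still log "Ready":"False" for 42 seconds above. A self-contained sketch of that condition check (illustrative, not the minikube source):

	// A pod counts as Ready only when its PodReady condition is True.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := &corev1.Pod{Status: corev1.PodStatus{
			Phase:      corev1.PodRunning, // Running alone is not enough
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
		}}
		fmt.Println(isPodReady(p)) // false
	}
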
	I0328 03:35:10.635486 3256235 api_server.go:52] waiting for apiserver process to appear ...
	I0328 03:35:10.635550 3256235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 03:35:10.654512 3256235 api_server.go:72] duration metric: took 43.978337442s to wait for apiserver process to appear ...
	I0328 03:35:10.654545 3256235 api_server.go:88] waiting for apiserver healthz status ...
	I0328 03:35:10.654566 3256235 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0328 03:35:10.662733 3256235 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0328 03:35:10.664038 3256235 api_server.go:141] control plane version: v1.29.3
	I0328 03:35:10.664063 3256235 api_server.go:131] duration metric: took 9.510882ms to wait for apiserver health ...
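
The healthz gate is a plain GET against the apiserver's /healthz path using the kubeconfig credentials; the loop proceeds only on a 200 with body "ok", exactly what the two lines above record. A client-go sketch of the same probe (only the kubeconfig path is taken from the log):

	// Probe /healthz through the kubeconfig credentials.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body) // expect "ok"
	}
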
	I0328 03:35:10.664072 3256235 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 03:35:10.724381 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:10.842105 3256235 system_pods.go:59] 18 kube-system pods found
	I0328 03:35:10.842142 3256235 system_pods.go:61] "coredns-76f75df574-4t7xb" [5af40e4a-d195-4c14-85cb-5de85be714fa] Running
	I0328 03:35:10.842152 3256235 system_pods.go:61] "csi-hostpath-attacher-0" [fcc8acbf-a1b1-4585-9ad0-d490f65f1171] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0328 03:35:10.842176 3256235 system_pods.go:61] "csi-hostpath-resizer-0" [f455f10e-f271-4bba-8b18-8ced67632a6d] Running
	I0328 03:35:10.842199 3256235 system_pods.go:61] "csi-hostpathplugin-pjsbd" [6d3f46d2-ad55-4e5b-88be-12e3ca376390] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0328 03:35:10.842214 3256235 system_pods.go:61] "etcd-addons-340351" [6c2b4711-647d-46dd-9b0d-9cc19c44f521] Running
	I0328 03:35:10.842219 3256235 system_pods.go:61] "kindnet-67627" [8ba8509a-1bee-481d-ab65-7aa3b7161a46] Running
	I0328 03:35:10.842227 3256235 system_pods.go:61] "kube-apiserver-addons-340351" [46c6af0e-cfde-4823-b0fa-9b90caa3ca7b] Running
	I0328 03:35:10.842231 3256235 system_pods.go:61] "kube-controller-manager-addons-340351" [a9bc9778-74a8-47eb-a1cc-773b8d33b514] Running
	I0328 03:35:10.842241 3256235 system_pods.go:61] "kube-ingress-dns-minikube" [f35953a6-96f4-48f7-a782-631feac05115] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 03:35:10.842255 3256235 system_pods.go:61] "kube-proxy-29lc9" [d8072468-8899-4ea7-a9f1-c8be947568f4] Running
	I0328 03:35:10.842260 3256235 system_pods.go:61] "kube-scheduler-addons-340351" [7b32c249-e353-4f2d-8444-a5215aa66c54] Running
	I0328 03:35:10.842266 3256235 system_pods.go:61] "metrics-server-69cf46c98-87zwk" [912dbcd4-98b5-4145-a0ad-4cfa8d5f457c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 03:35:10.842274 3256235 system_pods.go:61] "nvidia-device-plugin-daemonset-24zx7" [87d15db8-a090-4212-9d30-443f2319b151] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0328 03:35:10.842280 3256235 system_pods.go:61] "registry-l2d8j" [efbdf6d1-f769-43b5-92a9-b4b43129bbc9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0328 03:35:10.842291 3256235 system_pods.go:61] "registry-proxy-qdjhx" [48e81d37-f08c-4677-a66e-2dc91903192d] Running
	I0328 03:35:10.842298 3256235 system_pods.go:61] "snapshot-controller-58dbcc7b99-c7fr6" [cd05cbb1-c073-40de-8a48-8ec98af0c76a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0328 03:35:10.842303 3256235 system_pods.go:61] "snapshot-controller-58dbcc7b99-w6n4b" [a37ef814-9ca2-4124-9732-97d999919dd4] Running
	I0328 03:35:10.842313 3256235 system_pods.go:61] "storage-provisioner" [28a444ff-2b92-49df-ba5a-adf2135bd722] Running
	I0328 03:35:10.842321 3256235 system_pods.go:74] duration metric: took 178.24206ms to wait for pod list to return data ...
	I0328 03:35:10.842330 3256235 default_sa.go:34] waiting for default service account to be created ...
	I0328 03:35:11.006441 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:11.034730 3256235 default_sa.go:45] found service account: "default"
	I0328 03:35:11.034756 3256235 default_sa.go:55] duration metric: took 192.415744ms for default service account to be created ...
	I0328 03:35:11.034766 3256235 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 03:35:11.048374 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:11.059991 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:11.224770 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:11.241540 3256235 system_pods.go:86] 18 kube-system pods found
	I0328 03:35:11.241575 3256235 system_pods.go:89] "coredns-76f75df574-4t7xb" [5af40e4a-d195-4c14-85cb-5de85be714fa] Running
	I0328 03:35:11.241585 3256235 system_pods.go:89] "csi-hostpath-attacher-0" [fcc8acbf-a1b1-4585-9ad0-d490f65f1171] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0328 03:35:11.241591 3256235 system_pods.go:89] "csi-hostpath-resizer-0" [f455f10e-f271-4bba-8b18-8ced67632a6d] Running
	I0328 03:35:11.241599 3256235 system_pods.go:89] "csi-hostpathplugin-pjsbd" [6d3f46d2-ad55-4e5b-88be-12e3ca376390] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0328 03:35:11.241605 3256235 system_pods.go:89] "etcd-addons-340351" [6c2b4711-647d-46dd-9b0d-9cc19c44f521] Running
	I0328 03:35:11.241609 3256235 system_pods.go:89] "kindnet-67627" [8ba8509a-1bee-481d-ab65-7aa3b7161a46] Running
	I0328 03:35:11.241614 3256235 system_pods.go:89] "kube-apiserver-addons-340351" [46c6af0e-cfde-4823-b0fa-9b90caa3ca7b] Running
	I0328 03:35:11.241618 3256235 system_pods.go:89] "kube-controller-manager-addons-340351" [a9bc9778-74a8-47eb-a1cc-773b8d33b514] Running
	I0328 03:35:11.241626 3256235 system_pods.go:89] "kube-ingress-dns-minikube" [f35953a6-96f4-48f7-a782-631feac05115] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 03:35:11.241631 3256235 system_pods.go:89] "kube-proxy-29lc9" [d8072468-8899-4ea7-a9f1-c8be947568f4] Running
	I0328 03:35:11.241645 3256235 system_pods.go:89] "kube-scheduler-addons-340351" [7b32c249-e353-4f2d-8444-a5215aa66c54] Running
	I0328 03:35:11.241651 3256235 system_pods.go:89] "metrics-server-69cf46c98-87zwk" [912dbcd4-98b5-4145-a0ad-4cfa8d5f457c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 03:35:11.241660 3256235 system_pods.go:89] "nvidia-device-plugin-daemonset-24zx7" [87d15db8-a090-4212-9d30-443f2319b151] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0328 03:35:11.241675 3256235 system_pods.go:89] "registry-l2d8j" [efbdf6d1-f769-43b5-92a9-b4b43129bbc9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0328 03:35:11.241680 3256235 system_pods.go:89] "registry-proxy-qdjhx" [48e81d37-f08c-4677-a66e-2dc91903192d] Running
	I0328 03:35:11.241687 3256235 system_pods.go:89] "snapshot-controller-58dbcc7b99-c7fr6" [cd05cbb1-c073-40de-8a48-8ec98af0c76a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0328 03:35:11.241701 3256235 system_pods.go:89] "snapshot-controller-58dbcc7b99-w6n4b" [a37ef814-9ca2-4124-9732-97d999919dd4] Running
	I0328 03:35:11.241705 3256235 system_pods.go:89] "storage-provisioner" [28a444ff-2b92-49df-ba5a-adf2135bd722] Running
	I0328 03:35:11.241713 3256235 system_pods.go:126] duration metric: took 206.940197ms to wait for k8s-apps to be running ...
	I0328 03:35:11.241721 3256235 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 03:35:11.241783 3256235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 03:35:11.255008 3256235 system_svc.go:56] duration metric: took 13.277112ms WaitForService to wait for kubelet
	I0328 03:35:11.255036 3256235 kubeadm.go:576] duration metric: took 44.578865931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 03:35:11.255056 3256235 node_conditions.go:102] verifying NodePressure condition ...
	I0328 03:35:11.435640 3256235 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0328 03:35:11.435675 3256235 node_conditions.go:123] node cpu capacity is 2
	I0328 03:35:11.435688 3256235 node_conditions.go:105] duration metric: took 180.626912ms to run NodePressure ...
	I0328 03:35:11.435722 3256235 start.go:240] waiting for startup goroutines ...
	I0328 03:35:11.502660 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:11.547936 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:11.559239 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:11.726197 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:12.002666 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:12.048212 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:12.059884 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:12.227903 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:12.503090 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:12.562187 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:12.570710 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:12.727568 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:13.002664 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:13.048470 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:13.094501 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:13.227084 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:13.503672 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:13.550674 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:13.563609 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:13.726443 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:14.003320 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:14.046574 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:14.059325 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:14.225045 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:14.502527 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:14.547163 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:14.570572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:14.724956 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:15.002177 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:15.046440 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:15.059203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:15.225256 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:15.502262 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:15.554779 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:15.566317 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:15.730292 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:16.004015 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:16.046721 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:16.060524 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:16.227167 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:16.502183 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:16.546222 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:16.558822 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:16.725046 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:17.003491 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:17.046635 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:17.058252 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:17.223983 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:17.505984 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:17.546716 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:17.559355 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:17.725329 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:18.012151 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:18.046244 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:18.060023 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:18.226011 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:18.502985 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:18.546515 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:18.559103 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:18.725462 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:19.004560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:19.045959 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:19.059107 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:19.227406 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:19.502166 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:19.545640 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:19.558482 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:19.725118 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:20.003823 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:20.046721 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:20.061356 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:20.224401 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:20.502018 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:20.546673 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:20.558985 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:20.727706 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:21.005781 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:21.053795 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:21.059203 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:21.224230 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:21.503228 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:21.547172 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:21.559919 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:21.724275 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:22.011168 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:22.045932 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:22.058920 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:22.224367 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:22.502771 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:22.546166 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:22.559358 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:22.725266 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:23.005939 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:23.046759 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:23.059448 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:23.224782 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:23.502569 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:23.546490 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:23.558038 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:23.728098 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:24.005967 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:24.051732 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:24.058372 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:24.230541 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:24.502596 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:24.546295 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:24.560014 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:24.727789 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:25.003904 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:25.047151 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:25.059279 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:25.224632 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:25.507450 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 03:35:25.551296 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:25.565613 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:25.727734 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:26.003525 3256235 kapi.go:107] duration metric: took 49.005258624s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0328 03:35:26.006309 3256235 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-340351 cluster.
	I0328 03:35:26.008622 3256235 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0328 03:35:26.010634 3256235 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
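	For reference, opting a pod out of the gcp-auth credential mount takes only that one label. A minimal pod-spec sketch (the name and image here are hypothetical, and the "true" value follows the convention in the minikube docs):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-auth-example        # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"   # gcp-auth webhook skips pods carrying this label
	    spec:
	      containers:
	      - name: app
	        image: nginx
	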
	I0328 03:35:26.047471 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:26.058572 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:26.225622 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:26.549552 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:26.559581 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 03:35:26.724746 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:27.047538 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:27.059469 3256235 kapi.go:107] duration metric: took 52.506038387s to wait for kubernetes.io/minikube-addons=registry ...
	I0328 03:35:27.232002 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:27.547392 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:27.724761 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:28.047132 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:28.225696 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:28.546683 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:28.724831 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:29.045542 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:29.226063 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:29.545923 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:29.725108 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:30.051994 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:30.227295 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:30.546308 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:30.723407 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:31.046808 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:31.224934 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:31.545951 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:31.725584 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:32.046984 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:32.228560 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:32.546138 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:32.725202 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:33.046260 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:33.225712 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:33.545779 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:33.724396 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:34.045653 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:34.225849 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:34.545420 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:34.724199 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:35.046564 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:35.224440 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:35.545803 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:35.725549 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:36.046399 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:36.224477 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:36.547154 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:36.725233 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:37.051797 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:37.224470 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:37.546399 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:37.723866 3256235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 03:35:38.046275 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:38.225624 3256235 kapi.go:107] duration metric: took 1m2.51465373s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0328 03:35:38.547175 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:39.047101 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:39.545917 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:40.048121 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:40.546171 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:41.047123 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:41.546285 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:42.046255 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:42.554823 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:43.045215 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:43.545654 3256235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 03:35:44.051773 3256235 kapi.go:107] duration metric: took 1m9.510582976s to wait for app.kubernetes.io/name=ingress-nginx ...
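	The kapi.go lines above are a plain poll loop: list pods by label selector roughly every half second until every match reports Running. A minimal client-go sketch of that pattern (not minikube's actual implementation; the namespace, interval, and timeout are illustrative assumptions):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "os"
	        "path/filepath"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // waitForPodsRunning lists pods matching selector in ns and returns once at
	    // least one pod matches and every match has phase Running, polling on interval.
	    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
	        for {
	            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                return err
	            }
	            ready := len(pods.Items) > 0
	            for _, p := range pods.Items {
	                if p.Status.Phase != corev1.PodRunning {
	                    ready = false
	                    break
	                }
	            }
	            if ready {
	                return nil
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err() // overall deadline exceeded or caller cancelled
	            case <-time.After(interval):
	            }
	        }
	    }
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	        defer cancel()
	        if err := waitForPodsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 500*time.Millisecond); err != nil {
	            panic(err)
	        }
	        fmt.Println("all matching pods are Running")
	    }
	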
	I0328 03:35:44.056456 3256235 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, gcp-auth, registry, csi-hostpath-driver, ingress
	I0328 03:35:44.059076 3256235 addons.go:505] duration metric: took 1m17.382551328s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget metrics-server yakd volumesnapshots gcp-auth registry csi-hostpath-driver ingress]
	I0328 03:35:44.059142 3256235 start.go:245] waiting for cluster config update ...
	I0328 03:35:44.059163 3256235 start.go:254] writing updated cluster config ...
	I0328 03:35:44.059500 3256235 ssh_runner.go:195] Run: rm -f paused
	I0328 03:35:44.408626 3256235 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 03:35:44.411068 3256235 out.go:177] * Done! kubectl is now configured to use "addons-340351" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8c2516b0e3a30       46bd05c4a04f3       1 second ago         Exited              busybox                   0                   96ea2ab61abf9       test-local-path
	0af14b89de3ef       dd1b12fcb6097       1 second ago         Exited              hello-world-app           3                   7f3190163f6d5       hello-world-app-5d77478584-gzhzb
	9097224d295fe       fc9db2894f4e4       6 seconds ago        Exited              helper-pod                0                   064289b484554       helper-pod-create-pvc-909410b4-b229-4cbd-b1e7-84a050013630
	8424d7ac30795       b8c82647e8a25       49 seconds ago       Running             nginx                     0                   7bea4ddaaa56f       nginx
	e15a95ffb70b4       7ce2150c8929b       About a minute ago   Running             local-path-provisioner    0                   98a1709d49969       local-path-provisioner-78b46b4d5c-78cz5
	c037b7b5ccb3a       20e3f2db01e81       About a minute ago   Running             yakd                      0                   9bb2de1289b0e       yakd-dashboard-9947fc6bf-mhzbx
	c2b9ca89f109e       6ef582f3ec844       About a minute ago   Running             gcp-auth                  0                   8c4d3f5f59a90       gcp-auth-7d69788767-drfvk
	40f862debf865       6727f8bc3105d       About a minute ago   Running             cloud-spanner-emulator    0                   07464a7bfdb5f       cloud-spanner-emulator-5446596998-79w62
	458d4a3adf46d       1a024e390dd05       About a minute ago   Exited              patch                     2                   69d2456192928       ingress-nginx-admission-patch-9qxlj
	8d1c112873817       2437cf7621777       About a minute ago   Running             coredns                   0                   59d25e2c99736       coredns-76f75df574-4t7xb
	c6edd75053d61       1a024e390dd05       2 minutes ago        Exited              create                    0                   383e76396edb1       ingress-nginx-admission-create-lcmws
	9388e1fc0827b       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   bb330c58b668d       storage-provisioner
	33103b22f5e2e       4740c1948d3fc       2 minutes ago        Running             kindnet-cni               0                   86fe81901b85b       kindnet-67627
	0b803a1e6aaec       0e9b4a0d1e86d       2 minutes ago        Running             kube-proxy                0                   3b34d3414c3e4       kube-proxy-29lc9
	ba25d3bfe399f       4b51f9f6bc9b9       3 minutes ago        Running             kube-scheduler            0                   1c13a8367c3a4       kube-scheduler-addons-340351
	20b33063be704       014faa467e297       3 minutes ago        Running             etcd                      0                   42aeddac88c83       etcd-addons-340351
	e0f96c58a12b1       121d70d9a3805       3 minutes ago        Running             kube-controller-manager   0                   8d4a0e561b27e       kube-controller-manager-addons-340351
	d594d62b87501       2581114f5709d       3 minutes ago        Running             kube-apiserver            0                   ac8ed49880f7e       kube-apiserver-addons-340351
	
	
	==> containerd <==
	Mar 28 03:37:06 addons-340351 containerd[761]: time="2024-03-28T03:37:06.554971860Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 28 03:37:06 addons-340351 containerd[761]: time="2024-03-28T03:37:06.719028419Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 28 03:37:06 addons-340351 containerd[761]: time="2024-03-28T03:37:06.996918155Z" level=info msg="CreateContainer within sandbox \"7f3190163f6d5f75977dc9ba13e6f9824e3bb7060d1b4cbd84d8c80f9884c014\" for container &ContainerMetadata{Name:hello-world-app,Attempt:3,}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.021351740Z" level=info msg="CreateContainer within sandbox \"7f3190163f6d5f75977dc9ba13e6f9824e3bb7060d1b4cbd84d8c80f9884c014\" for &ContainerMetadata{Name:hello-world-app,Attempt:3,} returns container id \"0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.022402008Z" level=info msg="StartContainer for \"0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.073135897Z" level=info msg="StartContainer for \"0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed\" returns successfully"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.102445013Z" level=info msg="shim disconnected" id=0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.102501651Z" level=warning msg="cleaning up after shim disconnected" id=0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed namespace=k8s.io
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.102512137Z" level=info msg="cleaning up dead shim"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.112695493Z" level=warning msg="cleanup warnings time=\"2024-03-28T03:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11026 runtime=io.containerd.runc.v2\n"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.284837332Z" level=info msg="RemoveContainer for \"c525c9eedb856d071a68efb34433043ab120f1621d5423ad40c2e8f043a2da80\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.293974308Z" level=info msg="RemoveContainer for \"c525c9eedb856d071a68efb34433043ab120f1621d5423ad40c2e8f043a2da80\" returns successfully"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.381201261Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox:stable,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.387174502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:46bd05c4a04f3d121198e054da02daed22d0f561764acb0f0594066d5972619b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.391989645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox:stable,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.396746387Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox@sha256:650fd573e056b679a5110a70aabeb01e26b76e545ec4b9c70a9523f2dfaf18c6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.398132637Z" level=info msg="PullImage \"busybox:stable\" returns image reference \"sha256:46bd05c4a04f3d121198e054da02daed22d0f561764acb0f0594066d5972619b\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.413932895Z" level=info msg="CreateContainer within sandbox \"96ea2ab61abf9019318f6774f1174cd94aa8a16e65282deadee80f660693b6cf\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.431519392Z" level=info msg="CreateContainer within sandbox \"96ea2ab61abf9019318f6774f1174cd94aa8a16e65282deadee80f660693b6cf\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.434734711Z" level=info msg="StartContainer for \"8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d\""
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.507936832Z" level=info msg="StartContainer for \"8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d\" returns successfully"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.546931988Z" level=info msg="shim disconnected" id=8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.546992532Z" level=warning msg="cleaning up after shim disconnected" id=8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d namespace=k8s.io
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.547006037Z" level=info msg="cleaning up dead shim"
	Mar 28 03:37:07 addons-340351 containerd[761]: time="2024-03-28T03:37:07.554763942Z" level=warning msg="cleanup warnings time=\"2024-03-28T03:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11103 runtime=io.containerd.runc.v2\n"
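	The recurring "failed to decode hosts.toml" errors mean containerd found a registry hosts file (typically under /etc/containerd/certs.d/<registry>/hosts.toml) whose [host] section it could not parse. For comparison, a well-formed hosts.toml looks like this (the mirror URL is a placeholder, not taken from this run):
	
	    server = "https://registry-1.docker.io"
	
	    [host."https://mirror.example.com"]
	      capabilities = ["pull", "resolve"]
	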
	
	
	==> coredns [8d1c112873817d2b1615acd08b3a26b2a7436958f846ea54704dc771aac6e24e] <==
	[INFO] 10.244.0.20:37995 - 24958 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034058s
	[INFO] 10.244.0.20:39566 - 122 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053767s
	[INFO] 10.244.0.20:58497 - 360 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031688s
	[INFO] 10.244.0.20:48196 - 37035 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026568s
	[INFO] 10.244.0.20:48196 - 743 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001549692s
	[INFO] 10.244.0.20:39566 - 56174 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059002s
	[INFO] 10.244.0.20:58497 - 3953 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002073444s
	[INFO] 10.244.0.20:48196 - 21582 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000883265s
	[INFO] 10.244.0.20:39566 - 10768 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059477s
	[INFO] 10.244.0.20:48196 - 10754 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057885s
	[INFO] 10.244.0.20:58497 - 29508 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000861039s
	[INFO] 10.244.0.20:39566 - 32604 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038514s
	[INFO] 10.244.0.20:39566 - 22284 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046703s
	[INFO] 10.244.0.20:58497 - 57720 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030489s
	[INFO] 10.244.0.20:39566 - 8334 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069758s
	[INFO] 10.244.0.20:39566 - 28348 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001149243s
	[INFO] 10.244.0.20:39566 - 48992 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000994672s
	[INFO] 10.244.0.20:50134 - 1846 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046424s
	[INFO] 10.244.0.20:37995 - 21533 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001717483s
	[INFO] 10.244.0.20:39566 - 39567 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102857s
	[INFO] 10.244.0.20:50134 - 52174 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002117742s
	[INFO] 10.244.0.20:37995 - 48382 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005684437s
	[INFO] 10.244.0.20:50134 - 11654 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005069348s
	[INFO] 10.244.0.20:50134 - 471 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057402s
	[INFO] 10.244.0.20:37995 - 41038 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002724s
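	The NXDOMAIN bursts above are ordinary resolver search-path expansion: the client (10.244.0.20) tries each search domain in turn before the bare service FQDN finally answers NOERROR. A pod resolv.conf consistent with the suffixes seen in these queries (a sketch; the nameserver address and ndots:5 are the usual Kubernetes defaults, assumed rather than captured in this report):
	
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5
	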
	
	
	==> describe nodes <==
	Name:               addons-340351
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-340351
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=addons-340351
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T03_34_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-340351
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 03:34:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-340351
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 03:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 03:36:47 +0000   Thu, 28 Mar 2024 03:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-340351
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	System Info:
	  Machine ID:                 c929d13281ab461f9ae34957a1d9a2b2
	  System UUID:                68cfd376-65a2-46b7-bd23-4c7605fa936a
	  Boot ID:                    6d3ffb57-9092-48f6-a12c-685c1918590f
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-79w62    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     hello-world-app-5d77478584-gzhzb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  gcp-auth                    gcp-auth-7d69788767-drfvk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-76f75df574-4t7xb                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m43s
	  kube-system                 etcd-addons-340351                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m55s
	  kube-system                 kindnet-67627                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m43s
	  kube-system                 kube-apiserver-addons-340351               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-controller-manager-addons-340351      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-proxy-29lc9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-scheduler-addons-340351               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  local-path-storage          local-path-provisioner-78b46b4d5c-78cz5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-mhzbx             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node addons-340351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node addons-340351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)  kubelet          Node addons-340351 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m55s                kubelet          Node addons-340351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s                kubelet          Node addons-340351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s                kubelet          Node addons-340351 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m55s                kubelet          Node addons-340351 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                kubelet          Node addons-340351 status is now: NodeReady
	  Normal  RegisteredNode           2m43s                node-controller  Node addons-340351 event: Registered Node addons-340351 in Controller
	
	
	==> dmesg <==
	[  +0.001027] FS-Cache: O-key=[8] '6ae0c90000000000'
	[  +0.000690] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000921] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000088d31df7
	[  +0.001082] FS-Cache: N-key=[8] '6ae0c90000000000'
	[  +0.002528] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001280] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=0000000032bd8fff
	[  +0.001075] FS-Cache: O-key=[8] '6ae0c90000000000'
	[  +0.000794] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=000000007eb28088
	[  +0.001194] FS-Cache: N-key=[8] '6ae0c90000000000'
	[  +2.528101] FS-Cache: Duplicate cookie detected
	[  +0.000686] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001234] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000a2709a24
	[  +0.001172] FS-Cache: O-key=[8] '69e0c90000000000'
	[  +0.000678] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000088d31df7
	[  +0.001002] FS-Cache: N-key=[8] '69e0c90000000000'
	[  +0.307773] FS-Cache: Duplicate cookie detected
	[  +0.000920] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000b2051977
	[  +0.001032] FS-Cache: O-key=[8] '6fe0c90000000000'
	[  +0.000724] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=00000000670d971e
	[  +0.001075] FS-Cache: N-key=[8] '6fe0c90000000000'
	
	
	==> etcd [20b33063be704cb6eec5d3b9ce758f0e449df1c4d71e973f82bcc91b96466b84] <==
	{"level":"info","ts":"2024-03-28T03:34:06.545302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-28T03:34:06.552916Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-28T03:34:06.597386Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T03:34:06.602302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-28T03:34:06.60461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-28T03:34:06.620509Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T03:34:06.620479Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T03:34:06.793239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-28T03:34:06.793688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.793954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-28T03:34:06.803327Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.808512Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-340351 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T03:34:06.808733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T03:34:06.80919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T03:34:06.810942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T03:34:06.820505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T03:34:06.820676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T03:34:06.822262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-28T03:34:06.84073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.841023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T03:34:06.841164Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [c2b9ca89f109eecbba6a26d6ff5a32983f58993d8a660e8408bfa7d08cf71c82] <==
	2024/03/28 03:35:25 GCP Auth Webhook started!
	2024/03/28 03:35:55 Ready to marshal response ...
	2024/03/28 03:35:55 Ready to write response ...
	2024/03/28 03:36:18 Ready to marshal response ...
	2024/03/28 03:36:18 Ready to write response ...
	2024/03/28 03:36:18 Ready to marshal response ...
	2024/03/28 03:36:18 Ready to write response ...
	2024/03/28 03:36:27 Ready to marshal response ...
	2024/03/28 03:36:27 Ready to write response ...
	2024/03/28 03:36:39 Ready to marshal response ...
	2024/03/28 03:36:39 Ready to write response ...
	2024/03/28 03:37:01 Ready to marshal response ...
	2024/03/28 03:37:01 Ready to write response ...
	2024/03/28 03:37:01 Ready to marshal response ...
	2024/03/28 03:37:01 Ready to write response ...
	
	
	==> kernel <==
	 03:37:09 up 11:19,  0 users,  load average: 2.88, 3.75, 3.42
	Linux addons-340351 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [33103b22f5e2e6087942fc63016d85dfd0e2c61c6a1709289b68fee655322d1c] <==
	I0328 03:35:08.227139       1 main.go:227] handling current node
	I0328 03:35:18.241097       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:18.241124       1 main.go:227] handling current node
	I0328 03:35:28.253809       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:28.253834       1 main.go:227] handling current node
	I0328 03:35:38.258021       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:38.258062       1 main.go:227] handling current node
	I0328 03:35:48.270509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:48.270547       1 main.go:227] handling current node
	I0328 03:35:58.274943       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:35:58.274975       1 main.go:227] handling current node
	I0328 03:36:08.284518       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:08.284554       1 main.go:227] handling current node
	I0328 03:36:18.296545       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:18.296817       1 main.go:227] handling current node
	I0328 03:36:28.331388       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:28.331417       1 main.go:227] handling current node
	I0328 03:36:38.343680       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:38.343710       1 main.go:227] handling current node
	I0328 03:36:48.357027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:48.357057       1 main.go:227] handling current node
	I0328 03:36:58.361123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:36:58.361153       1 main.go:227] handling current node
	I0328 03:37:08.373647       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 03:37:08.373672       1 main.go:227] handling current node
	
	
	==> kube-apiserver [d594d62b875012a62e08e3f1844dd8ef7059b086bc7cec4b2ff305cc89ec476c] <==
	W0328 03:35:22.763072       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 03:35:22.763140       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0328 03:35:22.764143       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	E0328 03:35:22.765367       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	E0328 03:35:22.770232       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.123.168:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.123.168:443: connect: connection refused
	I0328 03:35:22.879830       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0328 03:36:12.358899       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0328 03:36:13.392416       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0328 03:36:17.937285       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0328 03:36:18.276072       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.1.148"}
	I0328 03:36:23.778396       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0328 03:36:27.078877       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0328 03:36:27.970648       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.118.125"}
	I0328 03:36:55.270077       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 03:36:55.270128       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 03:36:55.286400       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 03:36:55.286446       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 03:36:55.350843       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 03:36:55.351097       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 03:36:55.414933       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 03:36:55.415481       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0328 03:36:56.354334       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0328 03:36:56.415971       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0328 03:36:56.445652       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [e0f96c58a12b1920bb47425b1f4980aef254a7e7c11b731f70007f9b4d8391a6] <==
	I0328 03:36:56.581239       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 03:36:56.581276       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 03:36:57.050376       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 03:36:57.050421       1 shared_informer.go:318] Caches are synced for garbage collector
	W0328 03:36:57.787426       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:36:57.787461       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:36:57.863337       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:36:57.863373       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:36:57.907495       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:36:57.907531       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:36:59.757093       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:36:59.757277       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:37:00.019851       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:37:00.020110       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:37:00.513923       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:37:00.513959       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0328 03:37:00.934603       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0328 03:37:01.125146       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0328 03:37:04.592200       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:37:04.592242       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:37:04.711696       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:37:04.711733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 03:37:05.414493       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 03:37:05.414537       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0328 03:37:07.288222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.672µs"
	
	
	==> kube-proxy [0b803a1e6aaec0ee43426f9b9d1d0e0424055aaa20012a6a57ab961a25c387a3] <==
	I0328 03:34:27.982153       1 server_others.go:72] "Using iptables proxy"
	I0328 03:34:28.041502       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0328 03:34:28.126596       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0328 03:34:28.126627       1 server_others.go:168] "Using iptables Proxier"
	I0328 03:34:28.137110       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0328 03:34:28.137133       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0328 03:34:28.137165       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 03:34:28.137363       1 server.go:865] "Version info" version="v1.29.3"
	I0328 03:34:28.137374       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 03:34:28.148305       1 config.go:188] "Starting service config controller"
	I0328 03:34:28.148347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 03:34:28.148381       1 config.go:97] "Starting endpoint slice config controller"
	I0328 03:34:28.148386       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 03:34:28.148732       1 config.go:315] "Starting node config controller"
	I0328 03:34:28.148739       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 03:34:28.248918       1 shared_informer.go:318] Caches are synced for node config
	I0328 03:34:28.248965       1 shared_informer.go:318] Caches are synced for service config
	I0328 03:34:28.249017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ba25d3bfe399f4cc75bc23cd361e00c423b4b8b267b8b047f72f4bcb09894c1d] <==
	W0328 03:34:10.797436       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 03:34:10.797455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 03:34:10.797559       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 03:34:10.797577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 03:34:10.797654       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 03:34:10.797671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 03:34:10.797753       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:10.797769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:10.797847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 03:34:10.797884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 03:34:10.797978       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 03:34:10.797995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 03:34:10.798081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 03:34:10.798142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 03:34:10.798237       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:10.798257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:10.798323       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 03:34:10.798340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 03:34:11.759447       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 03:34:11.759670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 03:34:11.780942       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 03:34:11.781165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 03:34:11.841524       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 03:34:11.841565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 03:34:12.176556       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 03:37:05 addons-340351 kubelet[1490]: I0328 03:37:05.257460    1490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064289b4845548f7f7c764d6975bebcb65d32afd93a4b42e832f6a47dbe3bcc1"
	Mar 28 03:37:05 addons-340351 kubelet[1490]: I0328 03:37:05.993185    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989fd85a-1798-4bd3-b0ac-a27d34c5bbbc" path="/var/lib/kubelet/pods/989fd85a-1798-4bd3-b0ac-a27d34c5bbbc/volumes"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.136631    1490 topology_manager.go:215] "Topology Admit Handler" podUID="35e5557e-29b4-494a-b290-633e883d508c" podNamespace="default" podName="test-local-path"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: E0328 03:37:06.136861    1490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f35953a6-96f4-48f7-a782-631feac05115" containerName="minikube-ingress-dns"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: E0328 03:37:06.136943    1490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87d15db8-a090-4212-9d30-443f2319b151" containerName="nvidia-device-plugin-ctr"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: E0328 03:37:06.137022    1490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="989fd85a-1798-4bd3-b0ac-a27d34c5bbbc" containerName="helper-pod"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.137119    1490 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d15db8-a090-4212-9d30-443f2319b151" containerName="nvidia-device-plugin-ctr"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.137186    1490 memory_manager.go:354] "RemoveStaleState removing state" podUID="989fd85a-1798-4bd3-b0ac-a27d34c5bbbc" containerName="helper-pod"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.137245    1490 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35953a6-96f4-48f7-a782-631feac05115" containerName="minikube-ingress-dns"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.257662    1490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-gcp-creds\") pod \"test-local-path\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") " pod="default/test-local-path"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.257738    1490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxbh\" (UniqueName: \"kubernetes.io/projected/35e5557e-29b4-494a-b290-633e883d508c-kube-api-access-xlxbh\") pod \"test-local-path\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") " pod="default/test-local-path"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.257778    1490 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-909410b4-b229-4cbd-b1e7-84a050013630\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-pvc-909410b4-b229-4cbd-b1e7-84a050013630\") pod \"test-local-path\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") " pod="default/test-local-path"
	Mar 28 03:37:06 addons-340351 kubelet[1490]: I0328 03:37:06.983992    1490 scope.go:117] "RemoveContainer" containerID="c525c9eedb856d071a68efb34433043ab120f1621d5423ad40c2e8f043a2da80"
	Mar 28 03:37:07 addons-340351 kubelet[1490]: I0328 03:37:07.267548    1490 scope.go:117] "RemoveContainer" containerID="c525c9eedb856d071a68efb34433043ab120f1621d5423ad40c2e8f043a2da80"
	Mar 28 03:37:07 addons-340351 kubelet[1490]: I0328 03:37:07.268015    1490 scope.go:117] "RemoveContainer" containerID="0af14b89de3ef72ce50854e429a26a7448b7e716ee5347cf7988cb9e015b82ed"
	Mar 28 03:37:07 addons-340351 kubelet[1490]: E0328 03:37:07.268281    1490 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-gzhzb_default(055dad01-4b42-41b7-bc01-a0f308e1631e)\"" pod="default/hello-world-app-5d77478584-gzhzb" podUID="055dad01-4b42-41b7-bc01-a0f308e1631e"
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.482290    1490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxbh\" (UniqueName: \"kubernetes.io/projected/35e5557e-29b4-494a-b290-633e883d508c-kube-api-access-xlxbh\") pod \"35e5557e-29b4-494a-b290-633e883d508c\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") "
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.482343    1490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-gcp-creds\") pod \"35e5557e-29b4-494a-b290-633e883d508c\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") "
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.482374    1490 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-pvc-909410b4-b229-4cbd-b1e7-84a050013630\") pod \"35e5557e-29b4-494a-b290-633e883d508c\" (UID: \"35e5557e-29b4-494a-b290-633e883d508c\") "
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.482463    1490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-pvc-909410b4-b229-4cbd-b1e7-84a050013630" (OuterVolumeSpecName: "data") pod "35e5557e-29b4-494a-b290-633e883d508c" (UID: "35e5557e-29b4-494a-b290-633e883d508c"). InnerVolumeSpecName "pvc-909410b4-b229-4cbd-b1e7-84a050013630". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.482494    1490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "35e5557e-29b4-494a-b290-633e883d508c" (UID: "35e5557e-29b4-494a-b290-633e883d508c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.487311    1490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e5557e-29b4-494a-b290-633e883d508c-kube-api-access-xlxbh" (OuterVolumeSpecName: "kube-api-access-xlxbh") pod "35e5557e-29b4-494a-b290-633e883d508c" (UID: "35e5557e-29b4-494a-b290-633e883d508c"). InnerVolumeSpecName "kube-api-access-xlxbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.583449    1490 reconciler_common.go:300] "Volume detached for volume \"pvc-909410b4-b229-4cbd-b1e7-84a050013630\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-pvc-909410b4-b229-4cbd-b1e7-84a050013630\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.583499    1490 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xlxbh\" (UniqueName: \"kubernetes.io/projected/35e5557e-29b4-494a-b290-633e883d508c-kube-api-access-xlxbh\") on node \"addons-340351\" DevicePath \"\""
	Mar 28 03:37:09 addons-340351 kubelet[1490]: I0328 03:37:09.583514    1490 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35e5557e-29b4-494a-b290-633e883d508c-gcp-creds\") on node \"addons-340351\" DevicePath \"\""
	
	
	==> storage-provisioner [9388e1fc0827b2ec62acd1adb3f7ebe22c40ec106491f57e26c0ed8790b641bc] <==
	I0328 03:34:33.276912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 03:34:33.311892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 03:34:33.311941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 03:34:33.360020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 03:34:33.360198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18!
	I0328 03:34:33.364822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b01c804-6b13-4481-a11f-8e2ea3705bd3", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18 became leader
	I0328 03:34:33.460812       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-340351_ccb5f49e-f2ee-4fa3-ac31-591d491ebb18!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-340351 -n addons-340351
helpers_test.go:261: (dbg) Run:  kubectl --context addons-340351 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-340351 describe pod test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-340351 describe pod test-local-path:

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-340351/192.168.49.2
	Start Time:       Thu, 28 Mar 2024 03:37:06 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  containerd://8c2516b0e3a303672c290a8c014caabdd5c12fcce3a08854b65a79771150433d
	    Image:         busybox:stable
	    Image ID:      docker.io/library/busybox@sha256:650fd573e056b679a5110a70aabeb01e26b76e545ec4b9c70a9523f2dfaf18c6
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 28 Mar 2024 03:37:07 +0000
	      Finished:     Thu, 28 Mar 2024 03:37:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlxbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xlxbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/test-local-path to addons-340351
	  Normal  Pulling    4s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "busybox:stable" in 852ms (852ms including waiting)
	  Normal  Created    3s    kubelet            Created container busybox
	  Normal  Started    3s    kubelet            Started container busybox

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (8.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr: (4.108538841s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-376731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)
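
For reference, the sequence this test exercises can be replayed by hand against the same cluster; this is a minimal sketch using the commands recorded in the log above (profile name and image tag are specific to this run, and the final grep is an illustrative stand-in for the test's own assertion):

	# Load the image from the host Docker daemon into the minikube node.
	out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
	# List the images known to the cluster's container runtime; the test fails
	# because the expected tag is absent from this listing.
	out/minikube-linux-arm64 -p functional-376731 image ls | grep addon-resizer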

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr: (3.349719736s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-376731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.639380628s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-376731
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr: (3.137223967s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-376731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.04s)
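
The tag-and-load variant differs only in how the image reaches the host daemon first. A replay sketch assembled from the commands in this log (same caveats as above; the verification step is where the run fails):

	# Pull a known image and retag it with the profile-specific tag.
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-376731
	# Load the retagged image from the daemon into the cluster, then verify.
	out/minikube-linux-arm64 -p functional-376731 image load --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
	out/minikube-linux-arm64 -p functional-376731 image ls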

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image save gcr.io/google-containers/addon-resizer:functional-376731 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0328 03:42:35.346422 3290023 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:42:35.347518 3290023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:35.347534 3290023 out.go:304] Setting ErrFile to fd 2...
	I0328 03:42:35.347539 3290023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:35.347852 3290023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:42:35.348604 3290023 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:42:35.348779 3290023 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:42:35.349337 3290023 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
	I0328 03:42:35.365381 3290023 ssh_runner.go:195] Run: systemctl --version
	I0328 03:42:35.365483 3290023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
	I0328 03:42:35.380680 3290023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
	I0328 03:42:35.472776 3290023 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0328 03:42:35.472844 3290023 cache_images.go:254] Failed to load cached images for profile functional-376731. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0328 03:42:35.472865 3290023 cache_images.go:262] succeeded pushing to: 
	I0328 03:42:35.472870 3290023 cache_images.go:263] failed pushing to: functional-376731

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
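
This failure appears to be a direct consequence of ImageSaveToFile above: `image save` never wrote the tarball, so `image load` stats a path that does not exist (note the `stat ... no such file or directory` line in stderr). For reference, the round trip the two tests exercise, with the exact paths from the log:

	# Export an image from the cluster to a tar archive on the host ...
	out/minikube-linux-arm64 -p functional-376731 image save gcr.io/google-containers/addon-resizer:functional-376731 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
	# ... then re-import it from that archive.
	out/minikube-linux-arm64 -p functional-376731 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr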

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (381.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140381 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0328 04:20:44.462885 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 04:21:37.937988 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-140381 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.008605933s)

                                                
                                                
-- stdout --
	* [old-k8s-version-140381] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-140381" primary control-plane node in "old-k8s-version-140381" cluster
	* Pulling base image v0.0.43-1711559786-18485 ...
	* Restarting existing docker container for "old-k8s-version-140381" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-140381 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 04:20:42.246229 3452995 out.go:291] Setting OutFile to fd 1 ...
	I0328 04:20:42.246565 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:20:42.246616 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:20:42.246641 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:20:42.246950 3452995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 04:20:42.248454 3452995 out.go:298] Setting JSON to false
	I0328 04:20:42.249589 3452995 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":43380,"bootTime":1711556262,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 04:20:42.249709 3452995 start.go:139] virtualization:  
	I0328 04:20:42.253987 3452995 out.go:177] * [old-k8s-version-140381] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 04:20:42.256036 3452995 notify.go:220] Checking for updates...
	I0328 04:20:42.257179 3452995 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 04:20:42.259199 3452995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 04:20:42.261794 3452995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:20:42.263516 3452995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 04:20:42.265833 3452995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 04:20:42.267821 3452995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 04:20:42.270495 3452995 config.go:182] Loaded profile config "old-k8s-version-140381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 04:20:42.273309 3452995 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 04:20:42.275363 3452995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 04:20:42.295468 3452995 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 04:20:42.295627 3452995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:20:42.395611 3452995 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 04:20:42.384878223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:20:42.395721 3452995 docker.go:295] overlay module found
	I0328 04:20:42.397957 3452995 out.go:177] * Using the docker driver based on existing profile
	I0328 04:20:42.399659 3452995 start.go:297] selected driver: docker
	I0328 04:20:42.399678 3452995 start.go:901] validating driver "docker" against &{Name:old-k8s-version-140381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:20:42.399831 3452995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 04:20:42.400507 3452995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:20:42.480817 3452995 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 04:20:42.468033885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:20:42.481200 3452995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 04:20:42.481249 3452995 cni.go:84] Creating CNI manager for ""
	I0328 04:20:42.481266 3452995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 04:20:42.481304 3452995 start.go:340] cluster config:
	{Name:old-k8s-version-140381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:20:42.483817 3452995 out.go:177] * Starting "old-k8s-version-140381" primary control-plane node in "old-k8s-version-140381" cluster
	I0328 04:20:42.485831 3452995 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 04:20:42.487860 3452995 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 04:20:42.489809 3452995 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 04:20:42.489887 3452995 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0328 04:20:42.489903 3452995 cache.go:56] Caching tarball of preloaded images
	I0328 04:20:42.490003 3452995 preload.go:173] Found /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 04:20:42.490018 3452995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0328 04:20:42.490131 3452995 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/config.json ...
	I0328 04:20:42.490370 3452995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 04:20:42.506999 3452995 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0328 04:20:42.507018 3452995 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0328 04:20:42.507036 3452995 cache.go:194] Successfully downloaded all kic artifacts
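
image.go checks the local Docker daemon before pulling the kic base image. The same existence test can be reproduced with `docker image inspect`, whose non-zero exit code signals a missing image; a sketch with error handling simplified:

    package main

    import (
            "fmt"
            "os/exec"
    )

    // imageInDaemon reports whether the local Docker daemon already holds the
    // image, mirroring the "found ..., skipping pull" path in the log.
    // `docker image inspect` exits non-zero when the image is absent.
    func imageInDaemon(ref string) bool {
            return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
            ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485"
            fmt.Println(imageInDaemon(ref))
    }
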
	I0328 04:20:42.507064 3452995 start.go:360] acquireMachinesLock for old-k8s-version-140381: {Name:mkd2a2277eb3c386549587a3609c9e51e5a7f7c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 04:20:42.507129 3452995 start.go:364] duration metric: took 41µs to acquireMachinesLock for "old-k8s-version-140381"
	I0328 04:20:42.507149 3452995 start.go:96] Skipping create...Using existing machine configuration
	I0328 04:20:42.507154 3452995 fix.go:54] fixHost starting: 
	I0328 04:20:42.507430 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:42.526332 3452995 fix.go:112] recreateIfNeeded on old-k8s-version-140381: state=Stopped err=<nil>
	W0328 04:20:42.526375 3452995 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 04:20:42.528753 3452995 out.go:177] * Restarting existing docker container for "old-k8s-version-140381" ...
	I0328 04:20:42.531356 3452995 cli_runner.go:164] Run: docker start old-k8s-version-140381
	I0328 04:20:42.801161 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:42.822736 3452995 kic.go:430] container "old-k8s-version-140381" state is running.
	I0328 04:20:42.823136 3452995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140381
	I0328 04:20:42.848772 3452995 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/config.json ...
	I0328 04:20:42.849044 3452995 machine.go:94] provisionDockerMachine start ...
	I0328 04:20:42.849112 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:42.873876 3452995 main.go:141] libmachine: Using SSH client type: native
	I0328 04:20:42.874148 3452995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36524 <nil> <nil>}
	I0328 04:20:42.874157 3452995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 04:20:42.874812 3452995 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0328 04:20:46.017051 3452995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140381
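
The SSH endpoint (127.0.0.1:36524 here) is discovered by asking Docker which host port backs the container's 22/tcp mapping; the first dial races the container's sshd and fails with EOF before the retry succeeds. A sketch of the port lookup using the same Go template the log shows being passed to `docker container inspect`:

    package main

    import (
            "fmt"
            "os/exec"
            "strings"
    )

    // sshHostPort returns the host port Docker mapped to the container's
    // 22/tcp, via the template minikube uses in the inspect calls above.
    func sshHostPort(container string) (string, error) {
            format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
            out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
            if err != nil {
                    return "", err
            }
            return strings.TrimSpace(string(out)), nil
    }

    func main() {
            port, err := sshHostPort("old-k8s-version-140381")
            fmt.Println(port, err)
    }
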
	
	I0328 04:20:46.017076 3452995 ubuntu.go:169] provisioning hostname "old-k8s-version-140381"
	I0328 04:20:46.017144 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:46.050658 3452995 main.go:141] libmachine: Using SSH client type: native
	I0328 04:20:46.050927 3452995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36524 <nil> <nil>}
	I0328 04:20:46.050939 3452995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-140381 && echo "old-k8s-version-140381" | sudo tee /etc/hostname
	I0328 04:20:46.202144 3452995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140381
	
	I0328 04:20:46.202294 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:46.223285 3452995 main.go:141] libmachine: Using SSH client type: native
	I0328 04:20:46.223540 3452995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36524 <nil> <nil>}
	I0328 04:20:46.223557 3452995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-140381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-140381/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-140381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 04:20:46.364488 3452995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 04:20:46.364556 3452995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18485-3249988/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-3249988/.minikube}
	I0328 04:20:46.364584 3452995 ubuntu.go:177] setting up certificates
	I0328 04:20:46.364594 3452995 provision.go:84] configureAuth start
	I0328 04:20:46.364681 3452995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140381
	I0328 04:20:46.383342 3452995 provision.go:143] copyHostCerts
	I0328 04:20:46.383411 3452995 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem, removing ...
	I0328 04:20:46.383433 3452995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem
	I0328 04:20:46.383515 3452995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem (1078 bytes)
	I0328 04:20:46.383625 3452995 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem, removing ...
	I0328 04:20:46.383637 3452995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem
	I0328 04:20:46.383666 3452995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem (1123 bytes)
	I0328 04:20:46.383732 3452995 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem, removing ...
	I0328 04:20:46.383742 3452995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem
	I0328 04:20:46.383766 3452995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem (1675 bytes)
	I0328 04:20:46.383828 3452995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-140381 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-140381]
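
The "generating server cert" step signs a server certificate with the minikube CA, embedding exactly the SANs listed in the log line above. A compressed crypto/x509 sketch of that shape (illustrative only; minikube's real cert code lives in its own package, and errors are ignored here for brevity):

    package main

    import (
            "crypto/rand"
            "crypto/rsa"
            "crypto/x509"
            "crypto/x509/pkix"
            "encoding/pem"
            "math/big"
            "net"
            "os"
            "time"
    )

    func main() {
            // Stand-ins for ca.pem / ca-key.pem.
            caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
            caTmpl := &x509.Certificate{
                    SerialNumber:          big.NewInt(1),
                    Subject:               pkix.Name{CommonName: "minikubeCA"},
                    NotBefore:             time.Now(),
                    NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
                    IsCA:                  true,
                    KeyUsage:              x509.KeyUsageCertSign,
                    BasicConstraintsValid: true,
            }
            caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
            caCert, _ := x509.ParseCertificate(caDER)

            // Server cert with the SANs from the log: IPs and DNS names.
            srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
            srvTmpl := &x509.Certificate{
                    SerialNumber: big.NewInt(2),
                    Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-140381"}},
                    NotBefore:    time.Now(),
                    NotAfter:     time.Now().Add(26280 * time.Hour),
                    KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
                    ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
                    IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
                    DNSNames:     []string{"localhost", "minikube", "old-k8s-version-140381"},
            }
            der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
            pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
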
	I0328 04:20:46.983787 3452995 provision.go:177] copyRemoteCerts
	I0328 04:20:46.983868 3452995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 04:20:46.983916 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:46.999057 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:47.099047 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 04:20:47.129705 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 04:20:47.160950 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 04:20:47.194513 3452995 provision.go:87] duration metric: took 829.901427ms to configureAuth
	I0328 04:20:47.194602 3452995 ubuntu.go:193] setting minikube options for container-runtime
	I0328 04:20:47.194854 3452995 config.go:182] Loaded profile config "old-k8s-version-140381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 04:20:47.194896 3452995 machine.go:97] duration metric: took 4.345842986s to provisionDockerMachine
	I0328 04:20:47.194930 3452995 start.go:293] postStartSetup for "old-k8s-version-140381" (driver="docker")
	I0328 04:20:47.194984 3452995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 04:20:47.195093 3452995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 04:20:47.195162 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:47.216236 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:47.318497 3452995 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 04:20:47.322362 3452995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 04:20:47.322400 3452995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 04:20:47.322416 3452995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 04:20:47.322423 3452995 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 04:20:47.322437 3452995 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/addons for local assets ...
	I0328 04:20:47.322492 3452995 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/files for local assets ...
	I0328 04:20:47.322573 3452995 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem -> 32553982.pem in /etc/ssl/certs
	I0328 04:20:47.322679 3452995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 04:20:47.332143 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem --> /etc/ssl/certs/32553982.pem (1708 bytes)
	I0328 04:20:47.358731 3452995 start.go:296] duration metric: took 163.773521ms for postStartSetup
	I0328 04:20:47.358851 3452995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 04:20:47.358928 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:47.377823 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:47.473968 3452995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 04:20:47.479078 3452995 fix.go:56] duration metric: took 4.971908929s for fixHost
	I0328 04:20:47.479106 3452995 start.go:83] releasing machines lock for "old-k8s-version-140381", held for 4.971967808s
	I0328 04:20:47.479183 3452995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140381
	I0328 04:20:47.496235 3452995 ssh_runner.go:195] Run: cat /version.json
	I0328 04:20:47.496312 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:47.496521 3452995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 04:20:47.496594 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:47.526152 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:47.536081 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:47.767133 3452995 ssh_runner.go:195] Run: systemctl --version
	I0328 04:20:47.773114 3452995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 04:20:47.777809 3452995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 04:20:47.795171 3452995 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0328 04:20:47.795327 3452995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 04:20:47.804386 3452995 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 04:20:47.804464 3452995 start.go:494] detecting cgroup driver to use...
	I0328 04:20:47.804512 3452995 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 04:20:47.804591 3452995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 04:20:47.819417 3452995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 04:20:47.832043 3452995 docker.go:217] disabling cri-docker service (if available) ...
	I0328 04:20:47.832172 3452995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 04:20:47.846511 3452995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 04:20:47.859321 3452995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 04:20:47.966286 3452995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 04:20:48.083677 3452995 docker.go:233] disabling docker service ...
	I0328 04:20:48.083794 3452995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 04:20:48.098711 3452995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 04:20:48.111664 3452995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 04:20:48.227545 3452995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 04:20:48.349505 3452995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 04:20:48.363227 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 04:20:48.389440 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0328 04:20:48.407314 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 04:20:48.418403 3452995 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 04:20:48.418481 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 04:20:48.432317 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 04:20:48.445867 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 04:20:48.456200 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 04:20:48.466682 3452995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 04:20:48.476222 3452995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 04:20:48.486665 3452995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 04:20:48.496452 3452995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 04:20:48.505643 3452995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:20:48.614441 3452995 ssh_runner.go:195] Run: sudo systemctl restart containerd
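
The containerd reconfiguration above is a series of in-place sed edits against /etc/containerd/config.toml, followed by a daemon-reload and restart. The cgroup-driver edit, for instance, rewrites any SystemdCgroup line while preserving its indentation; the same transform in Go, applied to a string rather than the remote file:

    package main

    import (
            "fmt"
            "regexp"
    )

    func main() {
            // Same pattern as:
            //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
            re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
            conf := `  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`
            // ${1} keeps the captured leading spaces, so indentation survives.
            fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
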
	I0328 04:20:48.810055 3452995 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 04:20:48.810125 3452995 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 04:20:48.814082 3452995 start.go:562] Will wait 60s for crictl version
	I0328 04:20:48.814149 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:20:48.822233 3452995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 04:20:48.867559 3452995 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0328 04:20:48.867630 3452995 ssh_runner.go:195] Run: containerd --version
	I0328 04:20:48.889012 3452995 ssh_runner.go:195] Run: containerd --version
	I0328 04:20:48.915197 3452995 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0328 04:20:48.917258 3452995 cli_runner.go:164] Run: docker network inspect old-k8s-version-140381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 04:20:48.933776 3452995 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0328 04:20:48.937859 3452995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
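
The /etc/hosts refresh above is idempotent: strip any line already carrying the host.minikube.internal name, then append the current mapping, so repeated starts never accumulate stale entries. The same filter-and-append in Go, operating on file contents in memory (the real command writes the result back via sudo cp):

    package main

    import (
            "fmt"
            "strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends a fresh
    // "ip\tname" entry, matching the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
            var kept []string
            for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
                    if !strings.HasSuffix(line, "\t"+name) {
                            kept = append(kept, line)
                    }
            }
            kept = append(kept, ip+"\t"+name)
            return strings.Join(kept, "\n") + "\n"
    }

    func main() {
            fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.85.1", "host.minikube.internal"))
    }
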
	I0328 04:20:48.949391 3452995 kubeadm.go:877] updating cluster {Name:old-k8s-version-140381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 04:20:48.949513 3452995 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 04:20:48.949584 3452995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 04:20:48.997322 3452995 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 04:20:48.997349 3452995 containerd.go:534] Images already preloaded, skipping extraction
	I0328 04:20:48.997406 3452995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 04:20:49.044949 3452995 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 04:20:49.044973 3452995 cache_images.go:84] Images are preloaded, skipping loading
	I0328 04:20:49.044981 3452995 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0328 04:20:49.045104 3452995 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-140381 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 04:20:49.045182 3452995 ssh_runner.go:195] Run: sudo crictl info
	I0328 04:20:49.094633 3452995 cni.go:84] Creating CNI manager for ""
	I0328 04:20:49.094660 3452995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 04:20:49.094671 3452995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 04:20:49.094691 3452995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-140381 NodeName:old-k8s-version-140381 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 04:20:49.094822 3452995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-140381"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 04:20:49.094899 3452995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 04:20:49.105642 3452995 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 04:20:49.105719 3452995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 04:20:49.115439 3452995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0328 04:20:49.135578 3452995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 04:20:49.161018 3452995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0328 04:20:49.181142 3452995 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0328 04:20:49.184786 3452995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 04:20:49.196137 3452995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:20:49.312219 3452995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 04:20:49.331498 3452995 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381 for IP: 192.168.85.2
	I0328 04:20:49.331520 3452995 certs.go:194] generating shared ca certs ...
	I0328 04:20:49.331546 3452995 certs.go:226] acquiring lock for ca certs: {Name:mk654727350d982ceeedd640f586ca1658e18559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:20:49.331690 3452995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key
	I0328 04:20:49.331744 3452995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key
	I0328 04:20:49.331756 3452995 certs.go:256] generating profile certs ...
	I0328 04:20:49.331838 3452995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.key
	I0328 04:20:49.331916 3452995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/apiserver.key.83ac214b
	I0328 04:20:49.331959 3452995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/proxy-client.key
	I0328 04:20:49.332068 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398.pem (1338 bytes)
	W0328 04:20:49.332107 3452995 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398_empty.pem, impossibly tiny 0 bytes
	I0328 04:20:49.332117 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 04:20:49.332146 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem (1078 bytes)
	I0328 04:20:49.332172 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem (1123 bytes)
	I0328 04:20:49.332197 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem (1675 bytes)
	I0328 04:20:49.332242 3452995 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem (1708 bytes)
	I0328 04:20:49.332960 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 04:20:49.426463 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 04:20:49.503534 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 04:20:49.534859 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 04:20:49.560435 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 04:20:49.586099 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 04:20:49.610569 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 04:20:49.634777 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 04:20:49.659200 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 04:20:49.685549 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398.pem --> /usr/share/ca-certificates/3255398.pem (1338 bytes)
	I0328 04:20:49.711698 3452995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem --> /usr/share/ca-certificates/32553982.pem (1708 bytes)
	I0328 04:20:49.737992 3452995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 04:20:49.758677 3452995 ssh_runner.go:195] Run: openssl version
	I0328 04:20:49.764882 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 04:20:49.776204 3452995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:20:49.780079 3452995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 03:33 /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:20:49.780213 3452995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:20:49.787725 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 04:20:49.797339 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3255398.pem && ln -fs /usr/share/ca-certificates/3255398.pem /etc/ssl/certs/3255398.pem"
	I0328 04:20:49.807455 3452995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3255398.pem
	I0328 04:20:49.811935 3452995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 03:39 /usr/share/ca-certificates/3255398.pem
	I0328 04:20:49.812073 3452995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3255398.pem
	I0328 04:20:49.819712 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3255398.pem /etc/ssl/certs/51391683.0"
	I0328 04:20:49.829766 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32553982.pem && ln -fs /usr/share/ca-certificates/32553982.pem /etc/ssl/certs/32553982.pem"
	I0328 04:20:49.840216 3452995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32553982.pem
	I0328 04:20:49.844360 3452995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 03:39 /usr/share/ca-certificates/32553982.pem
	I0328 04:20:49.844477 3452995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32553982.pem
	I0328 04:20:49.851938 3452995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32553982.pem /etc/ssl/certs/3ec20f2e.0"
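
Each CA bundle dropped into /usr/share/ca-certificates is published to OpenSSL by symlinking it under its subject hash, which is where the b5213941.0-style names above come from. A sketch that shells out for the hash and creates the link (run here without sudo, so it is illustrative rather than a drop-in for the commands in the log):

    package main

    import (
            "fmt"
            "os"
            "os/exec"
            "strings"
    )

    // linkBySubjectHash mirrors: openssl x509 -hash -noout -in <pem>
    // followed by ln -fs <pem> <certsDir>/<hash>.0.
    func linkBySubjectHash(pemPath, certsDir string) error {
            out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
            if err != nil {
                    return err
            }
            hash := strings.TrimSpace(string(out))
            link := certsDir + "/" + hash + ".0"
            os.Remove(link) // emulate ln -f: replace any existing link
            return os.Symlink(pemPath, link)
    }

    func main() {
            fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
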
	I0328 04:20:49.862185 3452995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 04:20:49.866235 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 04:20:49.873771 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 04:20:49.881089 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 04:20:49.888358 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 04:20:49.895641 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 04:20:49.902938 3452995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
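
The six `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate survives the next 24 hours before reusing the existing cluster. A pure-Go equivalent of one probe:

    package main

    import (
            "crypto/x509"
            "encoding/pem"
            "fmt"
            "os"
            "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // lapses inside d, i.e. the condition `openssl x509 -checkend <seconds>`
    // exits non-zero for.
    func expiresWithin(path string, d time.Duration) (bool, error) {
            data, err := os.ReadFile(path)
            if err != nil {
                    return false, err
            }
            block, _ := pem.Decode(data)
            if block == nil {
                    return false, fmt.Errorf("no PEM data in %s", path)
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                    return false, err
            }
            return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
            fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second))
    }
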
	I0328 04:20:49.910400 3452995 kubeadm.go:391] StartCluster: {Name:old-k8s-version-140381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:20:49.910561 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0328 04:20:49.910647 3452995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 04:20:49.959022 3452995 cri.go:89] found id: "af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:20:49.959053 3452995 cri.go:89] found id: "a8a205ed367afc8283dc922919d853b38b07b442559e2d315356929745708407"
	I0328 04:20:49.959059 3452995 cri.go:89] found id: "ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:20:49.959062 3452995 cri.go:89] found id: "4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:20:49.959066 3452995 cri.go:89] found id: "1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:20:49.959087 3452995 cri.go:89] found id: "105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:20:49.959098 3452995 cri.go:89] found id: "1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:20:49.959102 3452995 cri.go:89] found id: "8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:20:49.959105 3452995 cri.go:89] found id: ""
	I0328 04:20:49.959169 3452995 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0328 04:20:49.973537 3452995 cri.go:116] JSON = null
	W0328 04:20:49.973609 3452995 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0328 04:20:49.973699 3452995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 04:20:49.983445 3452995 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 04:20:49.983469 3452995 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 04:20:49.983516 3452995 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 04:20:49.983593 3452995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 04:20:49.992966 3452995 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 04:20:49.993462 3452995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-140381" does not appear in /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:20:49.993631 3452995 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-3249988/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-140381" cluster setting kubeconfig missing "old-k8s-version-140381" context setting]
	I0328 04:20:49.993973 3452995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:20:49.995516 3452995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 04:20:50.006461 3452995 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0328 04:20:50.006526 3452995 kubeadm.go:591] duration metric: took 22.99707ms to restartPrimaryControlPlane
	I0328 04:20:50.006543 3452995 kubeadm.go:393] duration metric: took 96.154729ms to StartCluster
	I0328 04:20:50.006598 3452995 settings.go:142] acquiring lock: {Name:mkc9f345268bcac5ebc4aa579f709fe3221112b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:20:50.006685 3452995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:20:50.007559 3452995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:20:50.007885 3452995 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 04:20:50.016803 3452995 out.go:177] * Verifying Kubernetes components...
	I0328 04:20:50.008469 3452995 config.go:182] Loaded profile config "old-k8s-version-140381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0328 04:20:50.008425 3452995 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 04:20:50.020391 3452995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:20:50.017003 3452995 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-140381"
	I0328 04:20:50.020647 3452995 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-140381"
	W0328 04:20:50.020671 3452995 addons.go:243] addon storage-provisioner should already be in state true
	I0328 04:20:50.020706 3452995 host.go:66] Checking if "old-k8s-version-140381" exists ...
	I0328 04:20:50.021200 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:50.017111 3452995 addons.go:69] Setting dashboard=true in profile "old-k8s-version-140381"
	I0328 04:20:50.021490 3452995 addons.go:234] Setting addon dashboard=true in "old-k8s-version-140381"
	W0328 04:20:50.021505 3452995 addons.go:243] addon dashboard should already be in state true
	I0328 04:20:50.021561 3452995 host.go:66] Checking if "old-k8s-version-140381" exists ...
	I0328 04:20:50.022031 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:50.017123 3452995 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-140381"
	I0328 04:20:50.025660 3452995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-140381"
	I0328 04:20:50.026034 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:50.017140 3452995 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-140381"
	I0328 04:20:50.026117 3452995 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-140381"
	W0328 04:20:50.026144 3452995 addons.go:243] addon metrics-server should already be in state true
	I0328 04:20:50.026205 3452995 host.go:66] Checking if "old-k8s-version-140381" exists ...
	I0328 04:20:50.026626 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:50.096394 3452995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 04:20:50.098833 3452995 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:50.098859 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 04:20:50.098941 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:50.109626 3452995 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0328 04:20:50.111962 3452995 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0328 04:20:50.115996 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0328 04:20:50.116031 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0328 04:20:50.116106 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:50.118370 3452995 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 04:20:50.120129 3452995 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 04:20:50.120159 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 04:20:50.120235 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:50.109306 3452995 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-140381"
	W0328 04:20:50.122222 3452995 addons.go:243] addon default-storageclass should already be in state true
	I0328 04:20:50.122258 3452995 host.go:66] Checking if "old-k8s-version-140381" exists ...
	I0328 04:20:50.122709 3452995 cli_runner.go:164] Run: docker container inspect old-k8s-version-140381 --format={{.State.Status}}
	I0328 04:20:50.165813 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:50.199213 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:50.199330 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:50.216500 3452995 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 04:20:50.216526 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 04:20:50.216595 3452995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140381
	I0328 04:20:50.249835 3452995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/old-k8s-version-140381/id_rsa Username:docker}
	I0328 04:20:50.269203 3452995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 04:20:50.322708 3452995 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-140381" to be "Ready" ...
	I0328 04:20:50.372015 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:50.483087 3452995 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 04:20:50.483119 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 04:20:50.483565 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0328 04:20:50.483582 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0328 04:20:50.563711 3452995 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 04:20:50.563738 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 04:20:50.571054 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:50.579296 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:50.579329 3452995 retry.go:31] will retry after 302.360384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
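
While the apiserver is still coming up, every kubectl apply fails with "connection refused" and retry.go reschedules it after a short randomized delay, as in the "will retry after 302.360384ms" lines here. The pattern in miniature; the jitter bounds are assumptions, since the real backoff policy lives in minikube's retry package:

    package main

    import (
            "fmt"
            "math/rand"
            "os/exec"
            "time"
    )

    // applyWithRetry runs the command until it succeeds or attempts run out,
    // sleeping a jittered delay between tries.
    func applyWithRetry(attempts int, name string, args ...string) error {
            var err error
            for i := 0; i < attempts; i++ {
                    if err = exec.Command(name, args...).Run(); err == nil {
                            return nil
                    }
                    delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
                    fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
                    time.Sleep(delay)
            }
            return err
    }

    func main() {
            err := applyWithRetry(5, "kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
            fmt.Println(err)
    }
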
	I0328 04:20:50.581707 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0328 04:20:50.581747 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0328 04:20:50.616456 3452995 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:20:50.616490 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 04:20:50.659643 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0328 04:20:50.659684 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0328 04:20:50.659829 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:20:50.728897 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0328 04:20:50.728964 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0328 04:20:50.777210 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:50.777239 3452995 retry.go:31] will retry after 368.301413ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:50.795036 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0328 04:20:50.795060 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0328 04:20:50.825084 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0328 04:20:50.825151 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0328 04:20:50.873369 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:50.873400 3452995 retry.go:31] will retry after 232.600677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:50.882556 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:50.891098 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0328 04:20:50.891140 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0328 04:20:50.959774 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0328 04:20:50.959801 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0328 04:20:51.010966 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.011000 3452995 retry.go:31] will retry after 486.948125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.017348 3452995 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:20:51.017373 3452995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0328 04:20:51.040486 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:20:51.107130 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:20:51.146513 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:51.252113 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.252162 3452995 retry.go:31] will retry after 355.784899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:51.365557 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.365596 3452995 retry.go:31] will retry after 505.996745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:51.388008 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.388049 3452995 retry.go:31] will retry after 554.668132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.498194 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 04:20:51.595090 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.595196 3452995 retry.go:31] will retry after 727.469559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.608504 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 04:20:51.707328 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.707406 3452995 retry.go:31] will retry after 397.415483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.872217 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:20:51.943895 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:51.989442 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:51.989535 3452995 retry.go:31] will retry after 475.038879ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:52.092390 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.092601 3452995 retry.go:31] will retry after 715.255508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.105948 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 04:20:52.235541 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.235638 3452995 retry.go:31] will retry after 334.144548ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.323176 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:52.323696 3452995 node_ready.go:53] error getting node "old-k8s-version-140381": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140381": dial tcp 192.168.85.2:8443: connect: connection refused
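	Note that the node_ready.go:53 errors fail differently from the kubectl failures around them: kubectl runs inside the node and dials localhost:8443 via /var/lib/minikube/kubeconfig, while the readiness probe dials the node IP 192.168.85.2:8443 directly through client-go. A sketch of such a probe follows; the client-go calls are real APIs, but the kubeconfig location and function layout are assumptions.

```go
// Sketch of a node-readiness probe like node_ready.go:53 above: fetch
// the node object from the apiserver and inspect its NodeReady
// condition. While the apiserver is still coming up, the Get fails
// with "connect: connection refused" and the caller simply retries.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp 192.168.85.2:8443: connection refused
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed kubeconfig location; minikube resolves its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "old-k8s-version-140381")
	fmt.Printf("ready=%v err=%v\n", ready, err)
}
```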
	W0328 04:20:52.417931 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.418013 3452995 retry.go:31] will retry after 876.793642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.465314 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 04:20:52.568556 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.568639 3452995 retry.go:31] will retry after 1.047407152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.571023 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 04:20:52.674138 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.674219 3452995 retry.go:31] will retry after 932.180937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.808451 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:52.895229 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:52.895261 3452995 retry.go:31] will retry after 1.184663746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:53.295795 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 04:20:53.422043 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:53.422136 3452995 retry.go:31] will retry after 1.733626552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:53.607628 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:20:53.616560 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 04:20:53.862320 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:53.862432 3452995 retry.go:31] will retry after 1.726112477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:53.870950 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:53.871058 3452995 retry.go:31] will retry after 1.30776225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:54.080224 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:54.201131 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:54.201242 3452995 retry.go:31] will retry after 1.419488006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:54.323823 3452995 node_ready.go:53] error getting node "old-k8s-version-140381": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140381": dial tcp 192.168.85.2:8443: connect: connection refused
	I0328 04:20:55.156953 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:55.179915 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 04:20:55.375567 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:55.375612 3452995 retry.go:31] will retry after 2.70711165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:55.409274 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:55.409323 3452995 retry.go:31] will retry after 1.273878387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:55.589636 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:20:55.621060 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:55.790539 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:55.790566 3452995 retry.go:31] will retry after 1.085808331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:55.843955 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:55.843985 3452995 retry.go:31] will retry after 2.385844582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:56.323964 3452995 node_ready.go:53] error getting node "old-k8s-version-140381": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140381": dial tcp 192.168.85.2:8443: connect: connection refused
	I0328 04:20:56.683434 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:20:56.877106 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 04:20:56.942461 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:56.942511 3452995 retry.go:31] will retry after 2.388758742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 04:20:57.071259 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:57.071296 3452995 retry.go:31] will retry after 4.197118255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:58.083611 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:20:58.230024 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 04:20:58.316515 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:58.316543 3452995 retry.go:31] will retry after 2.443755901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:58.324078 3452995 node_ready.go:53] error getting node "old-k8s-version-140381": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140381": dial tcp 192.168.85.2:8443: connect: connection refused
	W0328 04:20:58.469900 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:58.469927 3452995 retry.go:31] will retry after 2.877295756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:59.331455 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 04:20:59.860959 3452995 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:20:59.860991 3452995 retry.go:31] will retry after 6.021170575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 04:21:00.760550 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:21:01.269460 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:21:01.348142 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 04:21:05.883102 3452995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:21:10.161035 3452995 node_ready.go:49] node "old-k8s-version-140381" has status "Ready":"True"
	I0328 04:21:10.161067 3452995 node_ready.go:38] duration metric: took 19.838325626s for node "old-k8s-version-140381" to be "Ready" ...
	I0328 04:21:10.161079 3452995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
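	The pod_ready.go:35 line above waits on one label selector per system-critical component, and the pod_ready.go:78/:92/:102 lines that follow report each matching pod's PodReady condition as it is polled. A minimal client-go sketch of that wait loop follows; the 6m0s timeout and selector list mirror the log, while the 2s poll interval and kubeconfig path are assumptions.

```go
// Minimal client-go sketch of the pod_ready.go wait loop: poll pods
// matching each label selector until every one reports PodReady=True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per system-critical component, as in pod_ready.go:35.
	selectors := []string{"k8s-app=kube-dns", "component=etcd",
		"component=kube-apiserver", "component=kube-controller-manager",
		"k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		werr := wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(
				context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				// Tolerate transient apiserver errors (connection
				// refused) and keep polling instead of failing.
				return false, nil
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
		fmt.Printf("selector %q ready: err=%v\n", sel, werr)
	}
}
```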
	I0328 04:21:10.291128 3452995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-cbbwd" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.488161 3452995 pod_ready.go:92] pod "coredns-74ff55c5b-cbbwd" in "kube-system" namespace has status "Ready":"True"
	I0328 04:21:10.488190 3452995 pod_ready.go:81] duration metric: took 196.975544ms for pod "coredns-74ff55c5b-cbbwd" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.488208 3452995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.569268 3452995 pod_ready.go:92] pod "etcd-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"True"
	I0328 04:21:10.569340 3452995 pod_ready.go:81] duration metric: took 81.123676ms for pod "etcd-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.569371 3452995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.597964 3452995 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"True"
	I0328 04:21:10.598035 3452995 pod_ready.go:81] duration metric: took 28.642756ms for pod "kube-apiserver-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:10.598062 3452995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:11.190921 3452995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.430330344s)
	I0328 04:21:11.702742 3452995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.433232194s)
	I0328 04:21:11.705230 3452995 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-140381 addons enable metrics-server
	
	I0328 04:21:11.703000 3452995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.354829503s)
	I0328 04:21:11.703095 3452995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.819964706s)
	I0328 04:21:11.707430 3452995 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-140381"
	I0328 04:21:11.725914 3452995 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0328 04:21:11.727669 3452995 addons.go:505] duration metric: took 21.719249712s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0328 04:21:12.604238 3452995 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:14.608186 3452995 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"True"
	I0328 04:21:14.608261 3452995 pod_ready.go:81] duration metric: took 4.010177817s for pod "kube-controller-manager-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:14.608289 3452995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qp768" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:16.616025 3452995 pod_ready.go:102] pod "kube-proxy-qp768" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:18.121056 3452995 pod_ready.go:92] pod "kube-proxy-qp768" in "kube-system" namespace has status "Ready":"True"
	I0328 04:21:18.121144 3452995 pod_ready.go:81] duration metric: took 3.512833411s for pod "kube-proxy-qp768" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:18.121180 3452995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:21:20.128498 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:22.627067 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:24.627256 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:27.127286 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:29.631044 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:32.128419 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:34.628809 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:37.128158 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:39.628011 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:41.628099 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:43.628484 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:46.127731 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:48.127802 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:50.183216 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:52.627362 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:54.627448 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:57.127648 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:21:59.628241 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:01.629026 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:04.127859 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:06.128284 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:08.627392 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:10.628622 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:13.127953 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:15.627378 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:17.630749 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:20.127912 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:22.627786 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:25.126904 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:27.127204 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:29.627868 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:32.127553 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:34.128755 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:35.626985 3452995 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:35.627012 3452995 pod_ready.go:81] duration metric: took 1m17.505809442s for pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:35.627025 3452995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:37.633133 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:39.633943 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:41.634151 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:43.634463 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:46.133199 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:48.134087 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:50.633505 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:53.134317 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:55.633629 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:58.134524 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:00.173150 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:02.634534 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:05.133784 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:07.134026 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:09.135534 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:11.633121 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:14.133586 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:16.133767 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:18.633041 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:20.633744 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:23.133287 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:25.142936 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:27.632977 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:29.633288 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:31.633344 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:33.639199 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:36.133366 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:38.632826 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:40.633035 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:43.134644 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:45.154998 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:47.633745 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:50.133728 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:52.134043 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:54.632779 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:57.133224 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:59.134165 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:01.135735 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:03.633487 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:06.136789 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:08.632796 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:10.633557 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:13.133272 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:15.133545 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:17.633414 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:20.133094 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:22.632566 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:24.633785 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:27.134831 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:29.637612 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:31.653424 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:34.133274 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:36.133875 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:38.134588 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:40.633596 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:43.133594 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:45.135097 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:47.633333 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:50.134044 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:52.633570 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:54.633700 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:57.133327 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:59.633362 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:01.634413 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:04.133368 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:06.134455 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:08.633303 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:10.633495 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:13.133566 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:15.633161 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:17.634906 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:20.133383 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:22.633441 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:25.134614 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:27.135731 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:29.633283 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:31.633405 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:33.633618 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:36.133429 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:38.133583 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:40.134666 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:42.633560 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:44.634790 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:47.133754 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:49.633444 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:51.633901 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:54.133469 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:56.134183 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:58.633378 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:00.633489 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:03.132760 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:05.133238 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:07.133969 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:09.633775 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:11.633937 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:14.133712 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:16.134639 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:18.634868 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:21.134284 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:23.633293 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:25.634070 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:27.634419 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:30.133907 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:32.633477 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:34.633772 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:35.633538 3452995 pod_ready.go:81] duration metric: took 4m0.006497865s for pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace to be "Ready" ...
	E0328 04:26:35.633563 3452995 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 04:26:35.633573 3452995 pod_ready.go:38] duration metric: took 5m25.472480946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
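The long run of pod_ready lines above is minikube polling the metrics-server pod roughly every 2-3s until the 4m budget expires. A rough manual equivalent of that check, assuming the profile's kubectl context is named old-k8s-version-140381 and that the pod carries the conventional k8s-app=metrics-server label (both are assumptions, neither appears verbatim in this log), would be:

	# Wait up to 4 minutes for the metrics-server pod to report the Ready condition
	kubectl --context old-k8s-version-140381 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m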
	I0328 04:26:35.633587 3452995 api_server.go:52] waiting for apiserver process to appear ...
	I0328 04:26:35.633665 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 04:26:35.633744 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 04:26:35.694891 3452995 cri.go:89] found id: "5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:35.694916 3452995 cri.go:89] found id: "1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:35.694920 3452995 cri.go:89] found id: ""
	I0328 04:26:35.694927 3452995 logs.go:276] 2 containers: [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16]
	I0328 04:26:35.694991 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.698567 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.701837 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 04:26:35.701907 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 04:26:35.743981 3452995 cri.go:89] found id: "332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:35.744068 3452995 cri.go:89] found id: "1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:35.744088 3452995 cri.go:89] found id: ""
	I0328 04:26:35.744115 3452995 logs.go:276] 2 containers: [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f]
	I0328 04:26:35.744225 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.748727 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.752998 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 04:26:35.753121 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 04:26:35.796998 3452995 cri.go:89] found id: "8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:35.797072 3452995 cri.go:89] found id: "af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:35.797083 3452995 cri.go:89] found id: ""
	I0328 04:26:35.797106 3452995 logs.go:276] 2 containers: [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22]
	I0328 04:26:35.797166 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.802331 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.805733 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 04:26:35.805844 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 04:26:35.845666 3452995 cri.go:89] found id: "42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:35.845689 3452995 cri.go:89] found id: "105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:35.845694 3452995 cri.go:89] found id: ""
	I0328 04:26:35.845701 3452995 logs.go:276] 2 containers: [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6]
	I0328 04:26:35.845758 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.849463 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.852714 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 04:26:35.852788 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 04:26:35.898906 3452995 cri.go:89] found id: "99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:35.898929 3452995 cri.go:89] found id: "4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:35.898934 3452995 cri.go:89] found id: ""
	I0328 04:26:35.898942 3452995 logs.go:276] 2 containers: [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508]
	I0328 04:26:35.899000 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.902499 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.905696 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 04:26:35.905765 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 04:26:35.942630 3452995 cri.go:89] found id: "0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:35.942654 3452995 cri.go:89] found id: "8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:35.942659 3452995 cri.go:89] found id: ""
	I0328 04:26:35.942666 3452995 logs.go:276] 2 containers: [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd]
	I0328 04:26:35.942722 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.946636 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.950203 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 04:26:35.950337 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 04:26:35.989298 3452995 cri.go:89] found id: "225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:35.989320 3452995 cri.go:89] found id: "ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:35.989325 3452995 cri.go:89] found id: ""
	I0328 04:26:35.989332 3452995 logs.go:276] 2 containers: [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff]
	I0328 04:26:35.989409 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.993106 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.997025 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 04:26:35.997151 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 04:26:36.039193 3452995 cri.go:89] found id: "3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:36.039217 3452995 cri.go:89] found id: "b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:36.039223 3452995 cri.go:89] found id: ""
	I0328 04:26:36.039230 3452995 logs.go:276] 2 containers: [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a]
	I0328 04:26:36.039310 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:36.043435 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:36.047137 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 04:26:36.047238 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 04:26:36.090628 3452995 cri.go:89] found id: "ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:36.090697 3452995 cri.go:89] found id: ""
	I0328 04:26:36.090720 3452995 logs.go:276] 1 containers: [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0]
	I0328 04:26:36.090788 3452995 ssh_runner.go:195] Run: which crictl
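The enumeration above repeats one two-step crictl pattern per component: first resolve container IDs by name, then (in the gathering phase that follows) dump each container's recent logs. Reusing the exact commands visible in this log, with <container-id> as a placeholder for an ID printed by the first step:

	# List IDs of all containers (running or exited) whose name matches kube-apiserver
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Show the last 400 log lines for one of those containers
	sudo crictl logs --tail 400 <container-id>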
	I0328 04:26:36.094637 3452995 logs.go:123] Gathering logs for kindnet [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b] ...
	I0328 04:26:36.094667 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:36.140186 3452995 logs.go:123] Gathering logs for kubernetes-dashboard [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0] ...
	I0328 04:26:36.140217 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:36.190206 3452995 logs.go:123] Gathering logs for dmesg ...
	I0328 04:26:36.190233 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 04:26:36.216448 3452995 logs.go:123] Gathering logs for coredns [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000] ...
	I0328 04:26:36.216478 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:36.256102 3452995 logs.go:123] Gathering logs for kube-scheduler [105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6] ...
	I0328 04:26:36.256180 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:36.303265 3452995 logs.go:123] Gathering logs for kube-proxy [4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508] ...
	I0328 04:26:36.303296 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:36.343365 3452995 logs.go:123] Gathering logs for kube-controller-manager [8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd] ...
	I0328 04:26:36.343391 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:36.427546 3452995 logs.go:123] Gathering logs for storage-provisioner [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20] ...
	I0328 04:26:36.427584 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:36.471558 3452995 logs.go:123] Gathering logs for describe nodes ...
	I0328 04:26:36.471595 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 04:26:36.685179 3452995 logs.go:123] Gathering logs for kube-apiserver [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d] ...
	I0328 04:26:36.685206 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:36.747646 3452995 logs.go:123] Gathering logs for coredns [af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22] ...
	I0328 04:26:36.747680 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:36.795667 3452995 logs.go:123] Gathering logs for kube-controller-manager [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0] ...
	I0328 04:26:36.795697 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:36.854031 3452995 logs.go:123] Gathering logs for storage-provisioner [b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a] ...
	I0328 04:26:36.854064 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:36.890779 3452995 logs.go:123] Gathering logs for containerd ...
	I0328 04:26:36.890807 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 04:26:36.953870 3452995 logs.go:123] Gathering logs for kube-apiserver [1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16] ...
	I0328 04:26:36.953905 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:37.016479 3452995 logs.go:123] Gathering logs for etcd [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864] ...
	I0328 04:26:37.016517 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:37.068423 3452995 logs.go:123] Gathering logs for kube-scheduler [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186] ...
	I0328 04:26:37.068451 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:37.110510 3452995 logs.go:123] Gathering logs for kindnet [ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff] ...
	I0328 04:26:37.110539 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:37.159372 3452995 logs.go:123] Gathering logs for container status ...
	I0328 04:26:37.159404 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 04:26:37.248698 3452995 logs.go:123] Gathering logs for kubelet ...
	I0328 04:26:37.249484 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 04:26:37.310175 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184708     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310412 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184830     666 reflector.go:138] object-"kube-system"/"kindnet-token-sg2gk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-sg2gk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310631 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184889     666 reflector.go:138] object-"kube-system"/"coredns-token-ssc6f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ssc6f" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310833 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184963     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.311233 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185036     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vl9zr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vl9zr" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.311495 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185101     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-2bzlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2bzlj" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.312109 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185160     666 reflector.go:138] object-"default"/"default-token-pcfrx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pcfrx" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.312415 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185207     666 reflector.go:138] object-"kube-system"/"metrics-server-token-zcjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zcjpk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.321599 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.343501     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.322000 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.682344     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.326008 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:27 old-k8s-version-140381 kubelet[666]: E0328 04:21:27.214018     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.328090 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:37 old-k8s-version-140381 kubelet[666]: E0328 04:21:37.780553     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.328974 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:38 old-k8s-version-140381 kubelet[666]: E0328 04:21:38.784869     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.329171 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:39 old-k8s-version-140381 kubelet[666]: E0328 04:21:39.202052     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.329501 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:40 old-k8s-version-140381 kubelet[666]: E0328 04:21:40.038234     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.329942 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:42 old-k8s-version-140381 kubelet[666]: E0328 04:21:42.795786     666 pod_workers.go:191] Error syncing pod 12216018-fd85-43f2-8766-9091100b1b60 ("storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"
	W0328 04:26:37.332865 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.216611     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.333200 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.819852     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.333789 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:00 old-k8s-version-140381 kubelet[666]: E0328 04:22:00.038034     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.333973 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:05 old-k8s-version-140381 kubelet[666]: E0328 04:22:05.202143     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.334559 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:13 old-k8s-version-140381 kubelet[666]: E0328 04:22:13.870530     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.334746 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:17 old-k8s-version-140381 kubelet[666]: E0328 04:22:17.239591     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.335074 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:20 old-k8s-version-140381 kubelet[666]: E0328 04:22:20.037953     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.335260 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:30 old-k8s-version-140381 kubelet[666]: E0328 04:22:30.204576     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.335590 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:31 old-k8s-version-140381 kubelet[666]: E0328 04:22:31.201687     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.335920 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:43 old-k8s-version-140381 kubelet[666]: E0328 04:22:43.201963     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.339689 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:44 old-k8s-version-140381 kubelet[666]: E0328 04:22:44.231952     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.340293 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:55 old-k8s-version-140381 kubelet[666]: E0328 04:22:55.985948     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.340488 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:57 old-k8s-version-140381 kubelet[666]: E0328 04:22:57.202196     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.340817 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:00 old-k8s-version-140381 kubelet[666]: E0328 04:23:00.039239     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.341007 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:08 old-k8s-version-140381 kubelet[666]: E0328 04:23:08.203565     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.341402 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:14 old-k8s-version-140381 kubelet[666]: E0328 04:23:14.202407     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.341591 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:19 old-k8s-version-140381 kubelet[666]: E0328 04:23:19.201996     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.341922 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:25 old-k8s-version-140381 kubelet[666]: E0328 04:23:25.201722     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.342106 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:33 old-k8s-version-140381 kubelet[666]: E0328 04:23:33.201910     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.342435 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:40 old-k8s-version-140381 kubelet[666]: E0328 04:23:40.205750     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.342620 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:45 old-k8s-version-140381 kubelet[666]: E0328 04:23:45.202153     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.342948 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:51 old-k8s-version-140381 kubelet[666]: E0328 04:23:51.202418     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.343131 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:59 old-k8s-version-140381 kubelet[666]: E0328 04:23:59.202144     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.343460 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:02 old-k8s-version-140381 kubelet[666]: E0328 04:24:02.205580     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.345967 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:13 old-k8s-version-140381 kubelet[666]: E0328 04:24:13.211613     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.346305 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:15 old-k8s-version-140381 kubelet[666]: E0328 04:24:15.201704     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.346625 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:26 old-k8s-version-140381 kubelet[666]: E0328 04:24:26.205065     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.347105 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:27 old-k8s-version-140381 kubelet[666]: E0328 04:24:27.190060     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.347477 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:30 old-k8s-version-140381 kubelet[666]: E0328 04:24:30.038061     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.347666 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:41 old-k8s-version-140381 kubelet[666]: E0328 04:24:41.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.347993 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:42 old-k8s-version-140381 kubelet[666]: E0328 04:24:42.202188     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.348180 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:54 old-k8s-version-140381 kubelet[666]: E0328 04:24:54.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.348519 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:56 old-k8s-version-140381 kubelet[666]: E0328 04:24:56.201866     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.348704 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:08 old-k8s-version-140381 kubelet[666]: E0328 04:25:08.203049     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.349034 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:09 old-k8s-version-140381 kubelet[666]: E0328 04:25:09.202098     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.349219 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:20 old-k8s-version-140381 kubelet[666]: E0328 04:25:20.202174     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.349552 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: E0328 04:25:24.201883     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.349736 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:34 old-k8s-version-140381 kubelet[666]: E0328 04:25:34.207786     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.350066 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: E0328 04:25:37.201754     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.350250 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:45 old-k8s-version-140381 kubelet[666]: E0328 04:25:45.204388     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.350577 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: E0328 04:25:50.205165     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.350761 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:58 old-k8s-version-140381 kubelet[666]: E0328 04:25:58.202750     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.351089 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.351274 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.352357 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.352556 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.352892 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
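Every kubelet problem flagged above reduces to two repeating failures: the metrics-server pod cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (fake.domain is an unresolvable registry host, apparently substituted by the test on purpose), and dashboard-metrics-scraper is stuck in CrashLoopBackOff. One way to confirm the offending image reference, assuming the addon's workload is a Deployment named metrics-server (an assumption, not shown in this log):

	# Print the image the metrics-server deployment is configured to pull
	kubectl -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'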
	I0328 04:26:37.352907 3452995 logs.go:123] Gathering logs for etcd [1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f] ...
	I0328 04:26:37.352921 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:37.405589 3452995 logs.go:123] Gathering logs for kube-proxy [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d] ...
	I0328 04:26:37.405617 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:37.445134 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:37.445160 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 04:26:37.445216 3452995 out.go:239] X Problems detected in kubelet:
	W0328 04:26:37.445227 3452995 out.go:239]   Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.445242 3452995 out.go:239]   Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.445251 3452995 out.go:239]   Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.445281 3452995 out.go:239]   Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.445288 3452995 out.go:239]   Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:37.445297 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:37.445303 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
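The kubelet problems flushed above come from tailing the kubelet journal and scanning it for pod-sync errors. A minimal sketch of reproducing that scan by hand on the node, assuming the same 400-line window minikube uses; the grep pattern is illustrative, not minikube's actual matcher:

	# Tail the last 400 kubelet journal entries and keep only pod-sync errors.
	sudo journalctl -u kubelet -n 400 --no-pager | grep 'pod_workers.go'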
	I0328 04:26:47.446294 3452995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 04:26:47.461377 3452995 api_server.go:72] duration metric: took 5m57.453447459s to wait for apiserver process to appear ...
	I0328 04:26:47.461406 3452995 api_server.go:88] waiting for apiserver healthz status ...
	I0328 04:26:47.461449 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 04:26:47.461515 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 04:26:47.505663 3452995 cri.go:89] found id: "5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:47.505684 3452995 cri.go:89] found id: "1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:47.505690 3452995 cri.go:89] found id: ""
	I0328 04:26:47.505697 3452995 logs.go:276] 2 containers: [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16]
	I0328 04:26:47.505753 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.509540 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.513884 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 04:26:47.513961 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 04:26:47.563757 3452995 cri.go:89] found id: "332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:47.563778 3452995 cri.go:89] found id: "1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:47.563783 3452995 cri.go:89] found id: ""
	I0328 04:26:47.563790 3452995 logs.go:276] 2 containers: [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f]
	I0328 04:26:47.563848 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.567669 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.571457 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 04:26:47.571532 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 04:26:47.612387 3452995 cri.go:89] found id: "8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:47.612406 3452995 cri.go:89] found id: "af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:47.612411 3452995 cri.go:89] found id: ""
	I0328 04:26:47.612418 3452995 logs.go:276] 2 containers: [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22]
	I0328 04:26:47.612474 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.616155 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.621215 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 04:26:47.621299 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 04:26:47.663527 3452995 cri.go:89] found id: "42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:47.663549 3452995 cri.go:89] found id: "105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:47.663554 3452995 cri.go:89] found id: ""
	I0328 04:26:47.663561 3452995 logs.go:276] 2 containers: [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6]
	I0328 04:26:47.663615 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.667290 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.670596 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 04:26:47.670662 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 04:26:47.711008 3452995 cri.go:89] found id: "99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:47.711080 3452995 cri.go:89] found id: "4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:47.711104 3452995 cri.go:89] found id: ""
	I0328 04:26:47.711132 3452995 logs.go:276] 2 containers: [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508]
	I0328 04:26:47.711236 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.715134 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.718983 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 04:26:47.719060 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 04:26:47.756880 3452995 cri.go:89] found id: "0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:47.756906 3452995 cri.go:89] found id: "8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:47.756911 3452995 cri.go:89] found id: ""
	I0328 04:26:47.756924 3452995 logs.go:276] 2 containers: [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd]
	I0328 04:26:47.756990 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.760730 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.764419 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 04:26:47.764493 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 04:26:47.803244 3452995 cri.go:89] found id: "225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:47.803268 3452995 cri.go:89] found id: "ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:47.803273 3452995 cri.go:89] found id: ""
	I0328 04:26:47.803281 3452995 logs.go:276] 2 containers: [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff]
	I0328 04:26:47.803342 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.806944 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.810244 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 04:26:47.810345 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 04:26:47.853320 3452995 cri.go:89] found id: "3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:47.853343 3452995 cri.go:89] found id: "b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:47.853349 3452995 cri.go:89] found id: ""
	I0328 04:26:47.853357 3452995 logs.go:276] 2 containers: [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a]
	I0328 04:26:47.853431 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.857639 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.861144 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 04:26:47.861258 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 04:26:47.903777 3452995 cri.go:89] found id: "ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:47.903798 3452995 cri.go:89] found id: ""
	I0328 04:26:47.903806 3452995 logs.go:276] 1 containers: [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0]
	I0328 04:26:47.903873 3452995 ssh_runner.go:195] Run: which crictl
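Every component above is located with the same discovery-then-tail pattern: crictl lists all containers (running or exited) whose name matches, and each returned ID is then tailed for its last 400 log lines. A minimal stand-alone sketch of that loop, using kube-apiserver as the example name:

	# Discover matching container IDs, then tail each one's logs,
	# exactly as the per-component runs above and below do.
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo /usr/bin/crictl logs --tail 400 "$id"
	done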
	I0328 04:26:47.907798 3452995 logs.go:123] Gathering logs for kube-scheduler [105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6] ...
	I0328 04:26:47.907825 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:47.946782 3452995 logs.go:123] Gathering logs for kube-controller-manager [8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd] ...
	I0328 04:26:47.946817 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:48.012150 3452995 logs.go:123] Gathering logs for kindnet [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b] ...
	I0328 04:26:48.012190 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:48.054567 3452995 logs.go:123] Gathering logs for kubernetes-dashboard [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0] ...
	I0328 04:26:48.054602 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:48.098363 3452995 logs.go:123] Gathering logs for kubelet ...
	I0328 04:26:48.098450 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 04:26:48.155763 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184708     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.155994 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184830     666 reflector.go:138] object-"kube-system"/"kindnet-token-sg2gk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-sg2gk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156208 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184889     666 reflector.go:138] object-"kube-system"/"coredns-token-ssc6f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ssc6f" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156425 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184963     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156691 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185036     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vl9zr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vl9zr" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156912 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185101     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-2bzlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2bzlj" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.157124 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185160     666 reflector.go:138] object-"default"/"default-token-pcfrx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pcfrx" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.157342 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185207     666 reflector.go:138] object-"kube-system"/"metrics-server-token-zcjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zcjpk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.164111 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.343501     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.164527 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.682344     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.168479 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:27 old-k8s-version-140381 kubelet[666]: E0328 04:21:27.214018     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.170568 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:37 old-k8s-version-140381 kubelet[666]: E0328 04:21:37.780553     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.171273 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:38 old-k8s-version-140381 kubelet[666]: E0328 04:21:38.784869     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.171456 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:39 old-k8s-version-140381 kubelet[666]: E0328 04:21:39.202052     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.171784 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:40 old-k8s-version-140381 kubelet[666]: E0328 04:21:40.038234     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.172222 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:42 old-k8s-version-140381 kubelet[666]: E0328 04:21:42.795786     666 pod_workers.go:191] Error syncing pod 12216018-fd85-43f2-8766-9091100b1b60 ("storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"
	W0328 04:26:48.175098 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.216611     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.175422 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.819852     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176007 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:00 old-k8s-version-140381 kubelet[666]: E0328 04:22:00.038034     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176190 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:05 old-k8s-version-140381 kubelet[666]: E0328 04:22:05.202143     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.176775 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:13 old-k8s-version-140381 kubelet[666]: E0328 04:22:13.870530     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176976 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:17 old-k8s-version-140381 kubelet[666]: E0328 04:22:17.239591     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.177306 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:20 old-k8s-version-140381 kubelet[666]: E0328 04:22:20.037953     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.177487 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:30 old-k8s-version-140381 kubelet[666]: E0328 04:22:30.204576     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.177811 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:31 old-k8s-version-140381 kubelet[666]: E0328 04:22:31.201687     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.178134 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:43 old-k8s-version-140381 kubelet[666]: E0328 04:22:43.201963     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.180561 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:44 old-k8s-version-140381 kubelet[666]: E0328 04:22:44.231952     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.181146 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:55 old-k8s-version-140381 kubelet[666]: E0328 04:22:55.985948     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.181332 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:57 old-k8s-version-140381 kubelet[666]: E0328 04:22:57.202196     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.181655 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:00 old-k8s-version-140381 kubelet[666]: E0328 04:23:00.039239     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.181836 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:08 old-k8s-version-140381 kubelet[666]: E0328 04:23:08.203565     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.182159 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:14 old-k8s-version-140381 kubelet[666]: E0328 04:23:14.202407     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.182339 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:19 old-k8s-version-140381 kubelet[666]: E0328 04:23:19.201996     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.182665 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:25 old-k8s-version-140381 kubelet[666]: E0328 04:23:25.201722     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.182849 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:33 old-k8s-version-140381 kubelet[666]: E0328 04:23:33.201910     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.183173 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:40 old-k8s-version-140381 kubelet[666]: E0328 04:23:40.205750     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.183357 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:45 old-k8s-version-140381 kubelet[666]: E0328 04:23:45.202153     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.183679 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:51 old-k8s-version-140381 kubelet[666]: E0328 04:23:51.202418     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.183861 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:59 old-k8s-version-140381 kubelet[666]: E0328 04:23:59.202144     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.184187 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:02 old-k8s-version-140381 kubelet[666]: E0328 04:24:02.205580     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.186606 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:13 old-k8s-version-140381 kubelet[666]: E0328 04:24:13.211613     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.186931 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:15 old-k8s-version-140381 kubelet[666]: E0328 04:24:15.201704     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.187241 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:26 old-k8s-version-140381 kubelet[666]: E0328 04:24:26.205065     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.187693 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:27 old-k8s-version-140381 kubelet[666]: E0328 04:24:27.190060     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188017 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:30 old-k8s-version-140381 kubelet[666]: E0328 04:24:30.038061     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188199 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:41 old-k8s-version-140381 kubelet[666]: E0328 04:24:41.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.188547 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:42 old-k8s-version-140381 kubelet[666]: E0328 04:24:42.202188     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188733 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:54 old-k8s-version-140381 kubelet[666]: E0328 04:24:54.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.189060 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:56 old-k8s-version-140381 kubelet[666]: E0328 04:24:56.201866     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.189245 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:08 old-k8s-version-140381 kubelet[666]: E0328 04:25:08.203049     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.189569 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:09 old-k8s-version-140381 kubelet[666]: E0328 04:25:09.202098     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.189750 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:20 old-k8s-version-140381 kubelet[666]: E0328 04:25:20.202174     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.190073 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: E0328 04:25:24.201883     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.190256 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:34 old-k8s-version-140381 kubelet[666]: E0328 04:25:34.207786     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.190579 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: E0328 04:25:37.201754     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.190761 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:45 old-k8s-version-140381 kubelet[666]: E0328 04:25:45.204388     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.191084 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: E0328 04:25:50.205165     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.192224 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:58 old-k8s-version-140381 kubelet[666]: E0328 04:25:58.202750     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.192568 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.192752 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.193083 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.193265 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.193592 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.193773 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:38 old-k8s-version-140381 kubelet[666]: E0328 04:26:38.206755     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.194095 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: E0328 04:26:40.201832     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:48.194105 3452995 logs.go:123] Gathering logs for kube-apiserver [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d] ...
	I0328 04:26:48.194122 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:48.273490 3452995 logs.go:123] Gathering logs for etcd [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864] ...
	I0328 04:26:48.273526 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:48.325499 3452995 logs.go:123] Gathering logs for coredns [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000] ...
	I0328 04:26:48.325533 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:48.383276 3452995 logs.go:123] Gathering logs for kube-proxy [4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508] ...
	I0328 04:26:48.383309 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:48.422168 3452995 logs.go:123] Gathering logs for storage-provisioner [b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a] ...
	I0328 04:26:48.422201 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:48.469344 3452995 logs.go:123] Gathering logs for container status ...
	I0328 04:26:48.469372 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
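The container-status command above is deliberately runtime-agnostic: it resolves crictl if installed, keeps the bare name as a fallback so the sudo invocation still fails cleanly, and only then tries the docker CLI. The same one-liner, broken out for readability:

	CRI=$(which crictl || echo crictl)       # prefer crictl; fall back to the bare name
	sudo "$CRI" ps -a || sudo docker ps -a   # if crictl is absent or errors, try docker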
	I0328 04:26:48.519326 3452995 logs.go:123] Gathering logs for dmesg ...
	I0328 04:26:48.519364 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 04:26:48.543760 3452995 logs.go:123] Gathering logs for describe nodes ...
	I0328 04:26:48.543788 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 04:26:48.683490 3452995 logs.go:123] Gathering logs for kube-apiserver [1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16] ...
	I0328 04:26:48.683520 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:48.740706 3452995 logs.go:123] Gathering logs for etcd [1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f] ...
	I0328 04:26:48.740740 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:48.788535 3452995 logs.go:123] Gathering logs for coredns [af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22] ...
	I0328 04:26:48.788610 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:48.831299 3452995 logs.go:123] Gathering logs for kube-scheduler [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186] ...
	I0328 04:26:48.831380 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:48.874554 3452995 logs.go:123] Gathering logs for kube-controller-manager [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0] ...
	I0328 04:26:48.874580 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:48.935711 3452995 logs.go:123] Gathering logs for containerd ...
	I0328 04:26:48.935742 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 04:26:48.997937 3452995 logs.go:123] Gathering logs for kube-proxy [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d] ...
	I0328 04:26:48.997987 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:49.040653 3452995 logs.go:123] Gathering logs for kindnet [ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff] ...
	I0328 04:26:49.040682 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:49.081516 3452995 logs.go:123] Gathering logs for storage-provisioner [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20] ...
	I0328 04:26:49.081545 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:49.124602 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:49.124627 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 04:26:49.124677 3452995 out.go:239] X Problems detected in kubelet:
	W0328 04:26:49.124693 3452995 out.go:239]   Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:49.124701 3452995 out.go:239]   Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:49.124710 3452995 out.go:239]   Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:49.124725 3452995 out.go:239]   Mar 28 04:26:38 old-k8s-version-140381 kubelet[666]: E0328 04:26:38.206755     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:49.124732 3452995 out.go:239]   Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: E0328 04:26:40.201832     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:49.124742 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:49.124750 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:26:59.125967 3452995 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0328 04:26:59.138702 3452995 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0328 04:26:59.141044 3452995 out.go:177] 
	W0328 04:26:59.143335 3452995 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0328 04:26:59.143387 3452995 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0328 04:26:59.143408 3452995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0328 04:26:59.143413 3452995 out.go:239] * 
	W0328 04:26:59.144298 3452995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 04:26:59.146742 3452995 out.go:177] 

                                                
                                                
** /stderr **
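Note on the capture above: the harness's final health check did get a 200 "ok" from https://192.168.85.2:8443/healthz, so the apiserver was reachable; the run fails only because the control plane never reported the requested v1.20.0. Below is a minimal Go sketch of that style of healthz probe. The address, the 5-second timeout, and the use of InsecureSkipVerify (the apiserver serves a certificate the host does not trust) are illustrative assumptions, not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a GET against the apiserver's /healthz endpoint and
// reports the status code and body. Hypothetical helper for illustration.
func probeHealthz(addr string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is self-signed, so
			// verification is skipped for this probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("%s returned %d: %s\n", addr, resp.StatusCode, body)
	return nil
}

func main() {
	if err := probeHealthz("192.168.85.2:8443"); err != nil {
		fmt.Println("healthz probe failed:", err)
	}
}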
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-140381 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
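The "exit status 102" above is the minikube process's exit code as observed when the harness shells out to the binary. A minimal sketch of capturing such a code with Go's os/exec follows; the binary path and flags are copied from the failing invocation, and this is illustrative, not the harness's own code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Invocation mirrors the failing command above (assumed working directory).
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "old-k8s-version-140381",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.20.0")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit (e.g. 102 in the failure above) surfaces here.
		fmt.Println("minikube exited with status", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}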
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-140381
helpers_test.go:235: (dbg) docker inspect old-k8s-version-140381:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b",
	        "Created": "2024-03-28T04:17:54.751263105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3453245,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T04:20:42.79348432Z",
	            "FinishedAt": "2024-03-28T04:20:41.539678353Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b/5e3d394c13dbd541ed038b5c5b59d05a0646d1ee746558698a0dbe636841c56b-json.log",
	        "Name": "/old-k8s-version-140381",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-140381:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-140381",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/53dce39a97fa5271d4567dd4224d3443be6ccc7eab139f9cdd17cc55468feb42-init/diff:/var/lib/docker/overlay2/30131fd39d8244f5536f8ed96d2d3a8ceec5075331a54f31974379c0fc24022e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53dce39a97fa5271d4567dd4224d3443be6ccc7eab139f9cdd17cc55468feb42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53dce39a97fa5271d4567dd4224d3443be6ccc7eab139f9cdd17cc55468feb42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53dce39a97fa5271d4567dd4224d3443be6ccc7eab139f9cdd17cc55468feb42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-140381",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-140381/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-140381",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-140381",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-140381",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef2d7d70ed83758ef467b49c78dbde3e1c142b6818ca4bc0bdfacbb0d4a9bf9a",
	            "SandboxKey": "/var/run/docker/netns/ef2d7d70ed83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36524"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36523"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36520"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36522"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-140381": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "d76b3a3f6f67b1d4385b888f4e8c934498c8a42d1c56500cb4a73135f186bab9",
	                    "EndpointID": "2442a818d4a02336a98bb1f47e89bbe60b3209a6ff0a7772473ac0ea2e878a54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-140381",
	                        "5e3d394c13db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
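The docker inspect JSON above is what the post-mortem steps consult to confirm the container state and to find the host port bound to the guest's SSH port (22/tcp, mapped to 127.0.0.1:36524 here). A minimal Go sketch of pulling those two fields out of the JSON is below; the struct and names are hypothetical, and the harness itself uses docker's --format templates (see the cli_runner lines later in this log) rather than decoding the full document.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectResult models only the fields of `docker inspect` output read here.
type inspectResult struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-140381").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// docker inspect prints a JSON array, one element per container.
	var results []inspectResult
	if err := json.Unmarshal(out, &results); err != nil {
		fmt.Println("could not decode inspect output:", err)
		return
	}
	if len(results) == 0 {
		fmt.Println("no such container")
		return
	}
	c := results[0]
	fmt.Println("state:", c.State.Status)
	if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
		fmt.Printf("ssh mapped to %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
	}
}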
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140381 -n old-k8s-version-140381
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-140381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-140381 logs -n 25: (2.607079812s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-406050 sudo find                             | cilium-406050                | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p cilium-406050 sudo crio                             | cilium-406050                | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC |                     |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p cilium-406050                                       | cilium-406050                | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC | 28 Mar 24 04:16 UTC |
	| start   | -p force-systemd-env-003239                            | force-systemd-env-003239     | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC | 28 Mar 24 04:17 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=5 --driver=docker                                   |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | force-systemd-flag-034881                              | force-systemd-flag-034881    | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC | 28 Mar 24 04:16 UTC |
	|         | ssh cat                                                |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| delete  | -p force-systemd-flag-034881                           | force-systemd-flag-034881    | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC | 28 Mar 24 04:16 UTC |
	| start   | -p cert-expiration-834080                              | cert-expiration-834080       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:16 UTC | 28 Mar 24 04:17 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | force-systemd-env-003239                               | force-systemd-env-003239     | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	|         | ssh cat                                                |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| delete  | -p force-systemd-env-003239                            | force-systemd-env-003239     | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	| start   | -p cert-options-503492                                 | cert-options-503492          | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | cert-options-503492 ssh                                | cert-options-503492          | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |                |                     |                     |
	| ssh     | -p cert-options-503492 -- sudo                         | cert-options-503492          | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |                |                     |                     |
	| delete  | -p cert-options-503492                                 | cert-options-503492          | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:17 UTC |
	| start   | -p old-k8s-version-140381                              | old-k8s-version-140381       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:17 UTC | 28 Mar 24 04:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-834080                              | cert-expiration-834080       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:20 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-834080                              | cert-expiration-834080       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:20 UTC |
	| addons  | enable metrics-server -p old-k8s-version-140381        | old-k8s-version-140381       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697565 | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:22 UTC |
	|         | default-k8s-diff-port-697565                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-140381                              | old-k8s-version-140381       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-140381             | old-k8s-version-140381       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC | 28 Mar 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-140381                              | old-k8s-version-140381       | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-697565  | default-k8s-diff-port-697565 | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:22 UTC | 28 Mar 24 04:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-697565 | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:22 UTC | 28 Mar 24 04:22 UTC |
	|         | default-k8s-diff-port-697565                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-697565       | default-k8s-diff-port-697565 | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:22 UTC | 28 Mar 24 04:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-697565 | jenkins | v1.33.0-beta.0 | 28 Mar 24 04:22 UTC |                     |
	|         | default-k8s-diff-port-697565                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 04:22:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 04:22:29.794093 3458290 out.go:291] Setting OutFile to fd 1 ...
	I0328 04:22:29.794311 3458290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:22:29.794334 3458290 out.go:304] Setting ErrFile to fd 2...
	I0328 04:22:29.794341 3458290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:22:29.794602 3458290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 04:22:29.795183 3458290 out.go:298] Setting JSON to false
	I0328 04:22:29.796411 3458290 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":43488,"bootTime":1711556262,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 04:22:29.796496 3458290 start.go:139] virtualization:  
	I0328 04:22:29.799635 3458290 out.go:177] * [default-k8s-diff-port-697565] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 04:22:29.804492 3458290 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 04:22:29.806866 3458290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 04:22:29.804625 3458290 notify.go:220] Checking for updates...
	I0328 04:22:29.813438 3458290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:22:29.818087 3458290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 04:22:29.823373 3458290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 04:22:29.826770 3458290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 04:22:29.830501 3458290 config.go:182] Loaded profile config "default-k8s-diff-port-697565": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:22:29.831022 3458290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 04:22:29.857629 3458290 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 04:22:29.857761 3458290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:22:29.927252 3458290 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 04:22:29.912111076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:22:29.927368 3458290 docker.go:295] overlay module found
	I0328 04:22:29.930616 3458290 out.go:177] * Using the docker driver based on existing profile
	I0328 04:22:29.932830 3458290 start.go:297] selected driver: docker
	I0328 04:22:29.932852 3458290 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-697565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-697565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:22:29.932971 3458290 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 04:22:29.933666 3458290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:22:29.992071 3458290 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 04:22:29.983396744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:22:29.992516 3458290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 04:22:29.992579 3458290 cni.go:84] Creating CNI manager for ""
	I0328 04:22:29.992593 3458290 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 04:22:29.992635 3458290 start.go:340] cluster config:
	{Name:default-k8s-diff-port-697565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-697565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:22:29.995324 3458290 out.go:177] * Starting "default-k8s-diff-port-697565" primary control-plane node in "default-k8s-diff-port-697565" cluster
	I0328 04:22:29.997219 3458290 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 04:22:29.999276 3458290 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 04:22:30.037394 3458290 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 04:22:30.037467 3458290 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0328 04:22:30.037479 3458290 cache.go:56] Caching tarball of preloaded images
	I0328 04:22:30.037618 3458290 preload.go:173] Found /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0328 04:22:30.037630 3458290 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0328 04:22:30.037781 3458290 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/config.json ...
	I0328 04:22:30.038061 3458290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 04:22:30.058258 3458290 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0328 04:22:30.058287 3458290 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0328 04:22:30.058309 3458290 cache.go:194] Successfully downloaded all kic artifacts
	I0328 04:22:30.058341 3458290 start.go:360] acquireMachinesLock for default-k8s-diff-port-697565: {Name:mkbc366de86627b3619cfcf64e8dc27bd67690b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 04:22:30.058432 3458290 start.go:364] duration metric: took 63.433µs to acquireMachinesLock for "default-k8s-diff-port-697565"
	I0328 04:22:30.058461 3458290 start.go:96] Skipping create...Using existing machine configuration
	I0328 04:22:30.058472 3458290 fix.go:54] fixHost starting: 
	I0328 04:22:30.058766 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:30.082983 3458290 fix.go:112] recreateIfNeeded on default-k8s-diff-port-697565: state=Stopped err=<nil>
	W0328 04:22:30.083046 3458290 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 04:22:30.085442 3458290 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-697565" ...
	I0328 04:22:29.627868 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:32.127553 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:30.087784 3458290 cli_runner.go:164] Run: docker start default-k8s-diff-port-697565
	I0328 04:22:30.397669 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:30.416509 3458290 kic.go:430] container "default-k8s-diff-port-697565" state is running.
	I0328 04:22:30.418580 3458290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-697565
	I0328 04:22:30.437756 3458290 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/config.json ...
	I0328 04:22:30.438038 3458290 machine.go:94] provisionDockerMachine start ...
	I0328 04:22:30.438121 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:30.472617 3458290 main.go:141] libmachine: Using SSH client type: native
	I0328 04:22:30.472942 3458290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36529 <nil> <nil>}
	I0328 04:22:30.472953 3458290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 04:22:30.474449 3458290 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39922->127.0.0.1:36529: read: connection reset by peer
	I0328 04:22:33.615798 3458290 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697565
	
	I0328 04:22:33.615855 3458290 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-697565"
	I0328 04:22:33.615939 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:33.637532 3458290 main.go:141] libmachine: Using SSH client type: native
	I0328 04:22:33.637770 3458290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36529 <nil> <nil>}
	I0328 04:22:33.637786 3458290 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-697565 && echo "default-k8s-diff-port-697565" | sudo tee /etc/hostname
	I0328 04:22:33.789235 3458290 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-697565
	
	I0328 04:22:33.789324 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:33.807257 3458290 main.go:141] libmachine: Using SSH client type: native
	I0328 04:22:33.807516 3458290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36529 <nil> <nil>}
	I0328 04:22:33.807542 3458290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-697565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-697565/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-697565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 04:22:33.944643 3458290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 04:22:33.944670 3458290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18485-3249988/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-3249988/.minikube}
	I0328 04:22:33.944697 3458290 ubuntu.go:177] setting up certificates
	I0328 04:22:33.944715 3458290 provision.go:84] configureAuth start
	I0328 04:22:33.944788 3458290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-697565
	I0328 04:22:33.959832 3458290 provision.go:143] copyHostCerts
	I0328 04:22:33.959902 3458290 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem, removing ...
	I0328 04:22:33.959923 3458290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem
	I0328 04:22:33.960003 3458290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.pem (1078 bytes)
	I0328 04:22:33.960108 3458290 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem, removing ...
	I0328 04:22:33.960119 3458290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem
	I0328 04:22:33.960149 3458290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/cert.pem (1123 bytes)
	I0328 04:22:33.960209 3458290 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem, removing ...
	I0328 04:22:33.960219 3458290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem
	I0328 04:22:33.960243 3458290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-3249988/.minikube/key.pem (1675 bytes)
	I0328 04:22:33.960295 3458290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-697565 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-697565 localhost minikube]
	I0328 04:22:34.229193 3458290 provision.go:177] copyRemoteCerts
	I0328 04:22:34.229262 3458290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 04:22:34.229317 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:34.245314 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:34.345141 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 04:22:34.373322 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 04:22:34.399030 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 04:22:34.424693 3458290 provision.go:87] duration metric: took 479.962322ms to configureAuth
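
The "generating server cert" line in configureAuth issues a server certificate signed by the profile CA with exactly the SANs listed (127.0.0.1, 192.168.76.2, the node name, localhost, minikube). A self-contained sketch with Go's crypto/x509; as an assumption for illustration, the CA is generated inline rather than loaded from ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA: in the log this pair lives on disk as ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server cert with the org and SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-697565"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-697565", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	f, err := os.Create("server.pem")
	must(err)
	defer f.Close()
	must(pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
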
	I0328 04:22:34.424761 3458290 ubuntu.go:193] setting minikube options for container-runtime
	I0328 04:22:34.424974 3458290 config.go:182] Loaded profile config "default-k8s-diff-port-697565": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:22:34.424988 3458290 machine.go:97] duration metric: took 3.986935831s to provisionDockerMachine
	I0328 04:22:34.424997 3458290 start.go:293] postStartSetup for "default-k8s-diff-port-697565" (driver="docker")
	I0328 04:22:34.425008 3458290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 04:22:34.425063 3458290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 04:22:34.425107 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:34.440471 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:34.537412 3458290 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 04:22:34.540604 3458290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 04:22:34.540641 3458290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 04:22:34.540652 3458290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 04:22:34.540660 3458290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 04:22:34.540670 3458290 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/addons for local assets ...
	I0328 04:22:34.540729 3458290 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-3249988/.minikube/files for local assets ...
	I0328 04:22:34.540834 3458290 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem -> 32553982.pem in /etc/ssl/certs
	I0328 04:22:34.540949 3458290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 04:22:34.549844 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem --> /etc/ssl/certs/32553982.pem (1708 bytes)
	I0328 04:22:34.573176 3458290 start.go:296] duration metric: took 148.163854ms for postStartSetup
	I0328 04:22:34.573257 3458290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 04:22:34.573312 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:34.592667 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:34.689523 3458290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 04:22:34.693817 3458290 fix.go:56] duration metric: took 4.635338077s for fixHost
	I0328 04:22:34.693841 3458290 start.go:83] releasing machines lock for "default-k8s-diff-port-697565", held for 4.635396792s
	I0328 04:22:34.693919 3458290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-697565
	I0328 04:22:34.709271 3458290 ssh_runner.go:195] Run: cat /version.json
	I0328 04:22:34.709335 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:34.709600 3458290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 04:22:34.709651 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:34.731977 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:34.737773 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:34.823716 3458290 ssh_runner.go:195] Run: systemctl --version
	I0328 04:22:34.958026 3458290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 04:22:34.962606 3458290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0328 04:22:34.980772 3458290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
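
The find/sed one-liner above ensures every loopback CNI conf carries a "name" field and pins its cniVersion to 1.0.0. The same patch sketched with encoding/json instead of sed (a deliberate substitution for readability; the filename is illustrative, not taken from this run):

package main

import (
	"encoding/json"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // illustrative filename
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // what the sed inserts before "type": "loopback"
	}
	conf["cniVersion"] = "1.0.0" // what the second substitution pins
	out, err := json.MarshalIndent(conf, "", "    ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
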
	I0328 04:22:34.980884 3458290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 04:22:34.990371 3458290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 04:22:34.990393 3458290 start.go:494] detecting cgroup driver to use...
	I0328 04:22:34.990425 3458290 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 04:22:34.990486 3458290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 04:22:35.017195 3458290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 04:22:35.038109 3458290 docker.go:217] disabling cri-docker service (if available) ...
	I0328 04:22:35.038203 3458290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 04:22:35.052664 3458290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 04:22:35.065631 3458290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 04:22:35.150607 3458290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 04:22:35.242691 3458290 docker.go:233] disabling docker service ...
	I0328 04:22:35.242754 3458290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 04:22:35.257359 3458290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 04:22:35.269684 3458290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 04:22:35.356412 3458290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 04:22:35.439436 3458290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 04:22:35.452002 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 04:22:35.473667 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 04:22:35.485649 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 04:22:35.497441 3458290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 04:22:35.497546 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 04:22:35.507695 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 04:22:35.518055 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 04:22:35.528101 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 04:22:35.538191 3458290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 04:22:35.547921 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 04:22:35.558704 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 04:22:35.569345 3458290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
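
The run of sed one-liners above rewrites /etc/containerd/config.toml in place: it pins the pause image, disables restrict_oom_score_adj, sets SystemdCgroup = false to match the detected "cgroupfs" driver, and migrates legacy runtime names to io.containerd.runc.v2. A sketch of a representative subset of those edits as Go regexps (an illustration, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	edits := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
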
	I0328 04:22:35.581650 3458290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 04:22:35.590779 3458290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 04:22:35.599612 3458290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:22:35.694572 3458290 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 04:22:35.874678 3458290 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0328 04:22:35.874862 3458290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0328 04:22:35.879077 3458290 start.go:562] Will wait 60s for crictl version
	I0328 04:22:35.879175 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:22:35.883339 3458290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 04:22:35.922082 3458290 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0328 04:22:35.922209 3458290 ssh_runner.go:195] Run: containerd --version
	I0328 04:22:35.945519 3458290 ssh_runner.go:195] Run: containerd --version
	I0328 04:22:35.971553 3458290 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0328 04:22:35.974389 3458290 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-697565 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 04:22:35.989023 3458290 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0328 04:22:35.992807 3458290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 04:22:36.005031 3458290 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-697565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-697565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 04:22:36.005189 3458290 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 04:22:36.005254 3458290 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 04:22:36.046687 3458290 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 04:22:36.046713 3458290 containerd.go:534] Images already preloaded, skipping extraction
	I0328 04:22:36.046782 3458290 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 04:22:36.083517 3458290 containerd.go:627] all images are preloaded for containerd runtime.
	I0328 04:22:36.083542 3458290 cache_images.go:84] Images are preloaded, skipping loading
	I0328 04:22:36.083556 3458290 kubeadm.go:928] updating node { 192.168.76.2 8444 v1.29.3 containerd true true} ...
	I0328 04:22:36.083684 3458290 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-697565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-697565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 04:22:36.083755 3458290 ssh_runner.go:195] Run: sudo crictl info
	I0328 04:22:36.134168 3458290 cni.go:84] Creating CNI manager for ""
	I0328 04:22:36.134195 3458290 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 04:22:36.134205 3458290 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 04:22:36.134258 3458290 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-697565 NodeName:default-k8s-diff-port-697565 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 04:22:36.134451 3458290 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-697565"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 04:22:36.134546 3458290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 04:22:36.144472 3458290 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 04:22:36.144560 3458290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 04:22:36.153509 3458290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0328 04:22:36.172116 3458290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 04:22:36.192015 3458290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0328 04:22:36.224546 3458290 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0328 04:22:36.228611 3458290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 04:22:36.241188 3458290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:22:36.335722 3458290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 04:22:36.357890 3458290 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565 for IP: 192.168.76.2
	I0328 04:22:36.357912 3458290 certs.go:194] generating shared ca certs ...
	I0328 04:22:36.357928 3458290 certs.go:226] acquiring lock for ca certs: {Name:mk654727350d982ceeedd640f586ca1658e18559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:22:36.358057 3458290 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key
	I0328 04:22:36.358108 3458290 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key
	I0328 04:22:36.358119 3458290 certs.go:256] generating profile certs ...
	I0328 04:22:36.358197 3458290 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.key
	I0328 04:22:36.358267 3458290 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/apiserver.key.79a12f43
	I0328 04:22:36.358330 3458290 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/proxy-client.key
	I0328 04:22:36.358456 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398.pem (1338 bytes)
	W0328 04:22:36.358491 3458290 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398_empty.pem, impossibly tiny 0 bytes
	I0328 04:22:36.358504 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 04:22:36.358529 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/ca.pem (1078 bytes)
	I0328 04:22:36.358555 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/cert.pem (1123 bytes)
	I0328 04:22:36.358583 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/key.pem (1675 bytes)
	I0328 04:22:36.358632 3458290 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem (1708 bytes)
	I0328 04:22:36.359300 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 04:22:36.394627 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 04:22:36.419191 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 04:22:36.443823 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 04:22:36.469138 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 04:22:36.495705 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 04:22:36.529277 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 04:22:36.556146 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 04:22:36.591325 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 04:22:36.617946 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/certs/3255398.pem --> /usr/share/ca-certificates/3255398.pem (1338 bytes)
	I0328 04:22:36.648219 3458290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/ssl/certs/32553982.pem --> /usr/share/ca-certificates/32553982.pem (1708 bytes)
	I0328 04:22:36.675589 3458290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 04:22:36.694895 3458290 ssh_runner.go:195] Run: openssl version
	I0328 04:22:36.700290 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32553982.pem && ln -fs /usr/share/ca-certificates/32553982.pem /etc/ssl/certs/32553982.pem"
	I0328 04:22:36.710382 3458290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32553982.pem
	I0328 04:22:36.713875 3458290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 03:39 /usr/share/ca-certificates/32553982.pem
	I0328 04:22:36.713944 3458290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32553982.pem
	I0328 04:22:36.720822 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32553982.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 04:22:36.729993 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 04:22:36.739283 3458290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:22:36.742616 3458290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 03:33 /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:22:36.742697 3458290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 04:22:36.749461 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 04:22:36.759237 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3255398.pem && ln -fs /usr/share/ca-certificates/3255398.pem /etc/ssl/certs/3255398.pem"
	I0328 04:22:36.769110 3458290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3255398.pem
	I0328 04:22:36.772579 3458290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 03:39 /usr/share/ca-certificates/3255398.pem
	I0328 04:22:36.772668 3458290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3255398.pem
	I0328 04:22:36.780746 3458290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3255398.pem /etc/ssl/certs/51391683.0"
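
The openssl x509 -hash / ln -fs pairs above wire each CA certificate into the system trust store: the symlink is named after the certificate's OpenSSL subject hash plus ".0" (b5213941.0 for minikubeCA.pem in this run). A sketch of one such step, shelling out to openssl so the hash matches what the log computed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links /etc/ssl/certs/<subject-hash>.0 at the given PEM,
// reproducing the "openssl x509 -hash" + "ln -fs" pair in the log.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
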
	I0328 04:22:36.789684 3458290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 04:22:36.793080 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 04:22:36.800034 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 04:22:36.807180 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 04:22:36.813894 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 04:22:36.820837 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 04:22:36.827732 3458290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
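
The -checkend 86400 runs above ask openssl whether each cluster certificate expires within the next 24 hours; on a restart, a failing check would trigger regeneration. The same test with Go's crypto/x509 (the path is just one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the Go equivalent of "openssl x509 -noout -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
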
	I0328 04:22:36.834874 3458290 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-697565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-697565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 04:22:36.834978 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0328 04:22:36.835042 3458290 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 04:22:36.874836 3458290 cri.go:89] found id: "98dfc5e3f3e768f015a7775e0e88f74415eb603f19cf42740d0493aed17a0ca8"
	I0328 04:22:36.874900 3458290 cri.go:89] found id: "d1c10640f82f224e4bfa16dfd16f19a0ac9f7bf2024ba3cef766c891b17bd4e5"
	I0328 04:22:36.874921 3458290 cri.go:89] found id: "392c0c82afe69c32edd37ac7eb093f1c137f825232f1530d0e64d2b8380c11d6"
	I0328 04:22:36.874931 3458290 cri.go:89] found id: "a95108a9241a2df26dfe727dff570eeaac3d2a4f6b109a27c67dcc8e48682265"
	I0328 04:22:36.874936 3458290 cri.go:89] found id: "d4ade15f2033664cfc7e9e2fe0c319dc3136c8a375169b3df234ac6fd61e284e"
	I0328 04:22:36.874942 3458290 cri.go:89] found id: "3f0e618e442c98e8c758dcf681167c817b1037dfa4bfa4215739878296af4a74"
	I0328 04:22:36.874951 3458290 cri.go:89] found id: "6fdf6f988a58a053f1a34e4a124be068272f9ae033291175c1769e6593c1f24e"
	I0328 04:22:36.874954 3458290 cri.go:89] found id: "c3b0cf249a97bdf75a6df5d2ba318c6846a48b18cf15064914aca399c4cde5ac"
	I0328 04:22:36.874958 3458290 cri.go:89] found id: ""
	I0328 04:22:36.875011 3458290 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0328 04:22:36.887447 3458290 cri.go:116] JSON = null
	W0328 04:22:36.887553 3458290 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0328 04:22:36.887638 3458290 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 04:22:36.896489 3458290 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 04:22:36.896513 3458290 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 04:22:36.896519 3458290 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 04:22:36.896605 3458290 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 04:22:36.905190 3458290 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 04:22:36.905793 3458290 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-697565" does not appear in /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:22:36.906055 3458290 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-3249988/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-697565" cluster setting kubeconfig missing "default-k8s-diff-port-697565" context setting]
	I0328 04:22:36.906527 3458290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:22:36.907904 3458290 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 04:22:36.916452 3458290 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0328 04:22:36.916524 3458290 kubeadm.go:591] duration metric: took 19.999318ms to restartPrimaryControlPlane
	I0328 04:22:36.916539 3458290 kubeadm.go:393] duration metric: took 81.673705ms to StartCluster
	I0328 04:22:36.916555 3458290 settings.go:142] acquiring lock: {Name:mkc9f345268bcac5ebc4aa579f709fe3221112b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:22:36.916633 3458290 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:22:36.917613 3458290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/kubeconfig: {Name:mkf778b20fa7ee9827f7d3539ae3fbccd66af6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 04:22:36.917851 3458290 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0328 04:22:36.922165 3458290 out.go:177] * Verifying Kubernetes components...
	I0328 04:22:36.918064 3458290 config.go:182] Loaded profile config "default-k8s-diff-port-697565": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:22:36.918087 3458290 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 04:22:36.924381 3458290 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697565"
	I0328 04:22:36.924412 3458290 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-697565"
	W0328 04:22:36.924426 3458290 addons.go:243] addon storage-provisioner should already be in state true
	I0328 04:22:36.924454 3458290 host.go:66] Checking if "default-k8s-diff-port-697565" exists ...
	I0328 04:22:36.924499 3458290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 04:22:36.924610 3458290 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-697565"
	I0328 04:22:36.924656 3458290 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-697565"
	W0328 04:22:36.924677 3458290 addons.go:243] addon dashboard should already be in state true
	I0328 04:22:36.924734 3458290 host.go:66] Checking if "default-k8s-diff-port-697565" exists ...
	I0328 04:22:36.924934 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:36.925212 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:36.928782 3458290 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697565"
	I0328 04:22:36.928878 3458290 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-697565"
	W0328 04:22:36.928891 3458290 addons.go:243] addon metrics-server should already be in state true
	I0328 04:22:36.928932 3458290 host.go:66] Checking if "default-k8s-diff-port-697565" exists ...
	I0328 04:22:36.929349 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:36.928311 3458290 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697565"
	I0328 04:22:36.929520 3458290 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697565"
	I0328 04:22:36.929763 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:36.955128 3458290 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 04:22:36.959089 3458290 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:22:36.959112 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 04:22:36.959179 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:36.980140 3458290 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0328 04:22:36.982300 3458290 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0328 04:22:36.984150 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0328 04:22:36.984172 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0328 04:22:36.984235 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:37.007391 3458290 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-697565"
	W0328 04:22:37.007427 3458290 addons.go:243] addon default-storageclass should already be in state true
	I0328 04:22:37.007459 3458290 host.go:66] Checking if "default-k8s-diff-port-697565" exists ...
	I0328 04:22:37.007874 3458290 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-697565 --format={{.State.Status}}
	I0328 04:22:37.016020 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:37.040291 3458290 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 04:22:34.128755 3452995 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:35.626985 3452995 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:35.627012 3452995 pod_ready.go:81] duration metric: took 1m17.505809442s for pod "kube-scheduler-old-k8s-version-140381" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:35.627025 3452995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:37.045684 3458290 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 04:22:37.045709 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 04:22:37.045775 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:37.052618 3458290 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 04:22:37.052641 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 04:22:37.052702 3458290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-697565
	I0328 04:22:37.070432 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:37.090103 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:37.096094 3458290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36529 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/default-k8s-diff-port-697565/id_rsa Username:docker}
	I0328 04:22:37.138895 3458290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 04:22:37.213762 3458290 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697565" to be "Ready" ...
	I0328 04:22:37.336472 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0328 04:22:37.336499 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0328 04:22:37.338082 3458290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:22:37.392136 3458290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 04:22:37.445674 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0328 04:22:37.445697 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0328 04:22:37.504701 3458290 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 04:22:37.504723 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	W0328 04:22:37.613164 3458290 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 04:22:37.613251 3458290 retry.go:31] will retry after 354.788525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
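
The "will retry after 354.788525ms" line is the generic apply-until-the-apiserver-answers loop: the first kubectl apply fails with connection refused because the apiserver on :8444 is still coming up, so retry.go reschedules it with a randomized delay. A minimal sketch of that pattern (the backoff values are assumptions, not minikube's retry.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs a kubectl command until it succeeds or the
// attempt budget is exhausted, doubling the delay between attempts.
func applyWithRetry(args []string, attempts int) error {
	backoff := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return err
}

func main() {
	err := applyWithRetry([]string{"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
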
	I0328 04:22:37.614459 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0328 04:22:37.614494 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0328 04:22:37.641070 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0328 04:22:37.641099 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0328 04:22:37.696856 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0328 04:22:37.696891 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0328 04:22:37.770529 3458290 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 04:22:37.770556 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 04:22:37.867371 3458290 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:22:37.867402 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 04:22:37.891618 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0328 04:22:37.891645 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0328 04:22:37.962196 3458290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 04:22:37.968664 3458290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 04:22:38.014972 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0328 04:22:38.015006 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0328 04:22:38.187311 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0328 04:22:38.187338 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0328 04:22:38.342333 3458290 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:22:38.342410 3458290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0328 04:22:38.390414 3458290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 04:22:37.633133 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:39.633943 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:41.634151 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:42.185348 3458290 node_ready.go:49] node "default-k8s-diff-port-697565" has status "Ready":"True"
	I0328 04:22:42.185378 3458290 node_ready.go:38] duration metric: took 4.971581658s for node "default-k8s-diff-port-697565" to be "Ready" ...
	I0328 04:22:42.185389 3458290 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 04:22:42.305115 3458290 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tprj9" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.363269 3458290 pod_ready.go:92] pod "coredns-76f75df574-tprj9" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:42.363293 3458290 pod_ready.go:81] duration metric: took 58.134047ms for pod "coredns-76f75df574-tprj9" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.363305 3458290 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.413432 3458290 pod_ready.go:92] pod "etcd-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:42.413505 3458290 pod_ready.go:81] duration metric: took 50.190979ms for pod "etcd-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.413534 3458290 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.467685 3458290 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:42.467712 3458290 pod_ready.go:81] duration metric: took 54.155744ms for pod "kube-apiserver-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.467726 3458290 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.478796 3458290 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:42.478827 3458290 pod_ready.go:81] duration metric: took 11.090697ms for pod "kube-controller-manager-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.478842 3458290 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjnmv" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.486835 3458290 pod_ready.go:92] pod "kube-proxy-wjnmv" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:42.486869 3458290 pod_ready.go:81] duration metric: took 8.019471ms for pod "kube-proxy-wjnmv" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.486883 3458290 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:42.665566 3458290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.273393014s)
	I0328 04:22:44.494665 3458290 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:45.319607 3458290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.357333935s)
	I0328 04:22:45.319726 3458290 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-697565"
	I0328 04:22:45.441168 3458290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.472456787s)
	I0328 04:22:45.605750 3458290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.21524486s)
	I0328 04:22:45.607677 3458290 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-697565 addons enable metrics-server
	
	I0328 04:22:45.609575 3458290 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0328 04:22:43.634463 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:46.133199 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:45.611413 3458290 addons.go:505] duration metric: took 8.693321789s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
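The addon-enable sequence above is plain `kubectl apply` fanned out over SSH: each ssh_runner line invokes the cluster's bundled kubectl against the in-VM kubeconfig, and the durations in parentheses are how long each apply took. A minimal local sketch of that invocation, with the command string copied from the log output and the SSH hop omitted (illustrative only, not minikube's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddon mirrors the logged command: run the cluster's own kubectl
	// binary with the in-VM kubeconfig against one addon manifest.
	func applyAddon(manifest string) error {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
				"/var/lib/minikube/binaries/v1.29.3/kubectl apply -f "+manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		}
		return nil
	}

	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println(err)
		}
	}

In the real run these applies execute concurrently over one SSH session, which is why a single storageclass.yaml apply can report 5.27s while the node is busy.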
	I0328 04:22:46.994678 3458290 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:49.493889 3458290 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:48.134087 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:50.633505 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:51.993171 3458290 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:52.994341 3458290 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace has status "Ready":"True"
	I0328 04:22:52.994363 3458290 pod_ready.go:81] duration metric: took 10.507471899s for pod "kube-scheduler-default-k8s-diff-port-697565" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:52.994375 3458290 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace to be "Ready" ...
	I0328 04:22:53.134317 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:55.633629 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:55.001609 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:57.002613 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:59.011712 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:22:58.134524 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:00.173150 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:01.502727 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:04.002173 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:02.634534 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:05.133784 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:07.134026 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:06.002420 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:08.500656 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:09.135534 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:11.633121 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:10.501028 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:12.501560 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:14.133586 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:16.133767 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:15.005061 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:17.007854 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:19.009461 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:18.633041 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:20.633744 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:21.501259 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:24.001328 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:23.133287 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:25.142936 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:26.007925 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:28.501251 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:27.632977 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:29.633288 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:31.633344 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:31.000932 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:33.003756 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:33.639199 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:36.133366 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:35.010780 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:37.500018 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:39.500802 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:38.632826 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:40.633035 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:41.502140 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:44.011550 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:43.134644 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:45.154998 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:46.501691 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:49.003119 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:47.633745 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:50.133728 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:52.134043 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:51.500887 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:54.001534 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:54.632779 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:57.133224 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:56.501935 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:59.001555 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:23:59.134165 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:01.135735 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:01.501928 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:04.000445 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:03.633487 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:06.136789 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:06.011009 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:08.501591 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:08.632796 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:10.633557 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:11.005637 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:13.500677 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:13.133272 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:15.133545 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:15.501537 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:18.005020 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:17.633414 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:20.133094 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:20.504128 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:23.001061 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:22.632566 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:24.633785 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:27.134831 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:25.005445 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:27.501206 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:29.637612 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:31.653424 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:30.017396 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:32.501226 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:34.501420 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:34.133274 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:36.133875 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:36.501855 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:39.001266 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:38.134588 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:40.633596 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:41.005238 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:43.500660 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:43.133594 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:45.135097 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:46.005265 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:48.501608 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:47.633333 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:50.134044 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:51.001632 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:53.004819 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:52.633570 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:54.633700 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:57.133327 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:55.501126 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:57.501973 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:24:59.633362 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:01.634413 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:00.018427 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:02.501016 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:04.501223 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:04.133368 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:06.134455 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:07.003367 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:09.004653 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:08.633303 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:10.633495 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:11.005211 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:13.501914 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:13.133566 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:15.633161 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:16.002866 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:18.501350 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:17.634906 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:20.133383 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:21.000832 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:23.005154 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:22.633441 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:25.134614 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:27.135731 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:25.500415 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:27.503337 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:29.633283 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:31.633405 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:30.002526 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:32.003860 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:34.004072 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:33.633618 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:36.133429 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:36.501129 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:38.501460 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:38.133583 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:40.134666 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:41.001191 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:43.005323 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:42.633560 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:44.634790 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:47.133754 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:45.019250 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:47.501223 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:49.633444 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:51.633901 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:50.004085 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:52.005789 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:54.500592 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:54.133469 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:56.134183 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:56.501336 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:58.507307 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:25:58.633378 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:00.633489 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:01.001118 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:03.005564 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:03.132760 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:05.133238 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:07.133969 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:05.005789 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:07.006627 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:09.500542 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:09.633775 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:11.633937 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:11.501723 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:14.003522 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:14.133712 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:16.134639 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:16.500216 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:18.500730 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:18.634868 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:21.134284 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:20.501146 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:22.502179 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:23.633293 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:25.634070 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:25.001230 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:27.501518 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:27.634419 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:30.133907 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:30.005997 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:32.501121 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:34.501154 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:32.633477 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:34.633772 3452995 pod_ready.go:102] pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:35.633538 3452995 pod_ready.go:81] duration metric: took 4m0.006497865s for pod "metrics-server-9975d5f86-cccxt" in "kube-system" namespace to be "Ready" ...
	E0328 04:26:35.633563 3452995 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 04:26:35.633573 3452995 pod_ready.go:38] duration metric: took 5m25.472480946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
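The `context deadline exceeded` above is the normal failure mode of a deadline-bounded readiness poll: re-check the pod's Ready condition on an interval until it flips true or the 4m0s budget runs out. A minimal sketch of that shape, assuming a recent client-go/apimachinery; the names here are illustrative, not minikube's actual pod_ready.go:

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the pod's Ready condition every 2s until it is
	// true or ctx's deadline expires, in which case the poll surfaces the
	// same "context deadline exceeded" error seen in the log above.
	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextCancel(ctx, 2*time.Second, true,
			func(ctx context.Context) (bool, error) {
				pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors; keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

A caller would wrap the context with context.WithTimeout(ctx, 4*time.Minute) to reproduce the 4m0s budget; the metrics-server pod here never turns Ready because its image pull keeps failing, so the deadline is what ends the wait.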
	I0328 04:26:35.633587 3452995 api_server.go:52] waiting for apiserver process to appear ...
	I0328 04:26:35.633665 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 04:26:35.633744 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 04:26:35.694891 3452995 cri.go:89] found id: "5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:35.694916 3452995 cri.go:89] found id: "1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:35.694920 3452995 cri.go:89] found id: ""
	I0328 04:26:35.694927 3452995 logs.go:276] 2 containers: [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16]
	I0328 04:26:35.694991 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.698567 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.701837 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 04:26:35.701907 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 04:26:35.743981 3452995 cri.go:89] found id: "332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:35.744068 3452995 cri.go:89] found id: "1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:35.744088 3452995 cri.go:89] found id: ""
	I0328 04:26:35.744115 3452995 logs.go:276] 2 containers: [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f]
	I0328 04:26:35.744225 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.748727 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.752998 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 04:26:35.753121 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 04:26:35.796998 3452995 cri.go:89] found id: "8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:35.797072 3452995 cri.go:89] found id: "af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:35.797083 3452995 cri.go:89] found id: ""
	I0328 04:26:35.797106 3452995 logs.go:276] 2 containers: [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22]
	I0328 04:26:35.797166 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.802331 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.805733 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 04:26:35.805844 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 04:26:35.845666 3452995 cri.go:89] found id: "42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:35.845689 3452995 cri.go:89] found id: "105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:35.845694 3452995 cri.go:89] found id: ""
	I0328 04:26:35.845701 3452995 logs.go:276] 2 containers: [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6]
	I0328 04:26:35.845758 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.849463 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.852714 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 04:26:35.852788 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 04:26:35.898906 3452995 cri.go:89] found id: "99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:35.898929 3452995 cri.go:89] found id: "4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:35.898934 3452995 cri.go:89] found id: ""
	I0328 04:26:35.898942 3452995 logs.go:276] 2 containers: [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508]
	I0328 04:26:35.899000 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.902499 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.905696 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 04:26:35.905765 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 04:26:35.942630 3452995 cri.go:89] found id: "0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:35.942654 3452995 cri.go:89] found id: "8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:35.942659 3452995 cri.go:89] found id: ""
	I0328 04:26:35.942666 3452995 logs.go:276] 2 containers: [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd]
	I0328 04:26:35.942722 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.946636 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.950203 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 04:26:35.950337 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 04:26:35.989298 3452995 cri.go:89] found id: "225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:35.989320 3452995 cri.go:89] found id: "ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:35.989325 3452995 cri.go:89] found id: ""
	I0328 04:26:35.989332 3452995 logs.go:276] 2 containers: [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff]
	I0328 04:26:35.989409 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.993106 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:35.997025 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 04:26:35.997151 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 04:26:36.039193 3452995 cri.go:89] found id: "3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:36.039217 3452995 cri.go:89] found id: "b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:36.039223 3452995 cri.go:89] found id: ""
	I0328 04:26:36.039230 3452995 logs.go:276] 2 containers: [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a]
	I0328 04:26:36.039310 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:36.043435 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:36.047137 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 04:26:36.047238 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 04:26:36.090628 3452995 cri.go:89] found id: "ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:36.090697 3452995 cri.go:89] found id: ""
	I0328 04:26:36.090720 3452995 logs.go:276] 1 containers: [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0]
	I0328 04:26:36.090788 3452995 ssh_runner.go:195] Run: which crictl
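Each `listing CRI containers` step above shells out to `crictl ps -a --quiet --name=<component>`, which prints one bare container ID per line; splitting that output is what produces the `N containers: [...]` lines. A local sketch of the same call (SSH hop omitted; a hypothetical helper, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns all container IDs (running or exited, because
	// of -a) whose name matches the given component, one ID per line.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println("crictl:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}

Two IDs per component is expected on this restarted cluster: one exited container from the first boot and one live replacement; kubernetes-dashboard shows only one because it was never restarted.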
	I0328 04:26:36.094637 3452995 logs.go:123] Gathering logs for kindnet [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b] ...
	I0328 04:26:36.094667 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:36.140186 3452995 logs.go:123] Gathering logs for kubernetes-dashboard [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0] ...
	I0328 04:26:36.140217 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:36.190206 3452995 logs.go:123] Gathering logs for dmesg ...
	I0328 04:26:36.190233 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 04:26:36.216448 3452995 logs.go:123] Gathering logs for coredns [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000] ...
	I0328 04:26:36.216478 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:36.256102 3452995 logs.go:123] Gathering logs for kube-scheduler [105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6] ...
	I0328 04:26:36.256180 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:36.303265 3452995 logs.go:123] Gathering logs for kube-proxy [4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508] ...
	I0328 04:26:36.303296 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:36.343365 3452995 logs.go:123] Gathering logs for kube-controller-manager [8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd] ...
	I0328 04:26:36.343391 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:36.427546 3452995 logs.go:123] Gathering logs for storage-provisioner [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20] ...
	I0328 04:26:36.427584 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:36.471558 3452995 logs.go:123] Gathering logs for describe nodes ...
	I0328 04:26:36.471595 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 04:26:36.685179 3452995 logs.go:123] Gathering logs for kube-apiserver [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d] ...
	I0328 04:26:36.685206 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:36.747646 3452995 logs.go:123] Gathering logs for coredns [af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22] ...
	I0328 04:26:36.747680 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:36.795667 3452995 logs.go:123] Gathering logs for kube-controller-manager [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0] ...
	I0328 04:26:36.795697 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:36.854031 3452995 logs.go:123] Gathering logs for storage-provisioner [b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a] ...
	I0328 04:26:36.854064 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:36.890779 3452995 logs.go:123] Gathering logs for containerd ...
	I0328 04:26:36.890807 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 04:26:36.953870 3452995 logs.go:123] Gathering logs for kube-apiserver [1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16] ...
	I0328 04:26:36.953905 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:37.016479 3452995 logs.go:123] Gathering logs for etcd [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864] ...
	I0328 04:26:37.016517 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:37.068423 3452995 logs.go:123] Gathering logs for kube-scheduler [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186] ...
	I0328 04:26:37.068451 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:37.110510 3452995 logs.go:123] Gathering logs for kindnet [ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff] ...
	I0328 04:26:37.110539 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:37.159372 3452995 logs.go:123] Gathering logs for container status ...
	I0328 04:26:37.159404 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
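The gathering pass then replays `crictl logs --tail 400 <id>` for every ID found above, plus journalctl for the containerd and kubelet units. A minimal sketch of the per-container step (hypothetical helper; the command string and the sample ID are copied from the log lines):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs pulls the last 400 log lines for one container ID,
	// mirroring the `sudo /usr/bin/crictl logs --tail 400 <id>` commands.
	func gatherLogs(id string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := gatherLogs("5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d")
		if err != nil {
			fmt.Println("crictl logs:", err)
		}
		fmt.Print(logs)
	}

The kubelet pass that follows scans the journalctl output for error-level lines, which is where the `Found kubelet problem` warnings below come from.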
	I0328 04:26:36.501574 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:38.501996 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:37.248698 3452995 logs.go:123] Gathering logs for kubelet ...
	I0328 04:26:37.249484 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 04:26:37.310175 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184708     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310412 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184830     666 reflector.go:138] object-"kube-system"/"kindnet-token-sg2gk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-sg2gk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310631 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184889     666 reflector.go:138] object-"kube-system"/"coredns-token-ssc6f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ssc6f" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.310833 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184963     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.311233 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185036     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vl9zr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vl9zr" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.311495 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185101     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-2bzlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2bzlj" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.312109 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185160     666 reflector.go:138] object-"default"/"default-token-pcfrx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pcfrx" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.312415 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185207     666 reflector.go:138] object-"kube-system"/"metrics-server-token-zcjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zcjpk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:37.321599 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.343501     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.322000 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.682344     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.326008 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:27 old-k8s-version-140381 kubelet[666]: E0328 04:21:27.214018     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.328090 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:37 old-k8s-version-140381 kubelet[666]: E0328 04:21:37.780553     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.328974 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:38 old-k8s-version-140381 kubelet[666]: E0328 04:21:38.784869     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.329171 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:39 old-k8s-version-140381 kubelet[666]: E0328 04:21:39.202052     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.329501 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:40 old-k8s-version-140381 kubelet[666]: E0328 04:21:40.038234     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.329942 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:42 old-k8s-version-140381 kubelet[666]: E0328 04:21:42.795786     666 pod_workers.go:191] Error syncing pod 12216018-fd85-43f2-8766-9091100b1b60 ("storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"
	W0328 04:26:37.332865 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.216611     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.333200 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.819852     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.333789 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:00 old-k8s-version-140381 kubelet[666]: E0328 04:22:00.038034     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.333973 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:05 old-k8s-version-140381 kubelet[666]: E0328 04:22:05.202143     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.334559 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:13 old-k8s-version-140381 kubelet[666]: E0328 04:22:13.870530     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.334746 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:17 old-k8s-version-140381 kubelet[666]: E0328 04:22:17.239591     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.335074 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:20 old-k8s-version-140381 kubelet[666]: E0328 04:22:20.037953     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.335260 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:30 old-k8s-version-140381 kubelet[666]: E0328 04:22:30.204576     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.335590 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:31 old-k8s-version-140381 kubelet[666]: E0328 04:22:31.201687     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.335920 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:43 old-k8s-version-140381 kubelet[666]: E0328 04:22:43.201963     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.339689 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:44 old-k8s-version-140381 kubelet[666]: E0328 04:22:44.231952     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.340293 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:55 old-k8s-version-140381 kubelet[666]: E0328 04:22:55.985948     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.340488 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:57 old-k8s-version-140381 kubelet[666]: E0328 04:22:57.202196     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.340817 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:00 old-k8s-version-140381 kubelet[666]: E0328 04:23:00.039239     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.341007 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:08 old-k8s-version-140381 kubelet[666]: E0328 04:23:08.203565     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.341402 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:14 old-k8s-version-140381 kubelet[666]: E0328 04:23:14.202407     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.341591 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:19 old-k8s-version-140381 kubelet[666]: E0328 04:23:19.201996     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.341922 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:25 old-k8s-version-140381 kubelet[666]: E0328 04:23:25.201722     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.342106 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:33 old-k8s-version-140381 kubelet[666]: E0328 04:23:33.201910     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.342435 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:40 old-k8s-version-140381 kubelet[666]: E0328 04:23:40.205750     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.342620 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:45 old-k8s-version-140381 kubelet[666]: E0328 04:23:45.202153     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.342948 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:51 old-k8s-version-140381 kubelet[666]: E0328 04:23:51.202418     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.343131 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:59 old-k8s-version-140381 kubelet[666]: E0328 04:23:59.202144     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.343460 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:02 old-k8s-version-140381 kubelet[666]: E0328 04:24:02.205580     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.345967 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:13 old-k8s-version-140381 kubelet[666]: E0328 04:24:13.211613     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:37.346305 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:15 old-k8s-version-140381 kubelet[666]: E0328 04:24:15.201704     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.346625 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:26 old-k8s-version-140381 kubelet[666]: E0328 04:24:26.205065     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.347105 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:27 old-k8s-version-140381 kubelet[666]: E0328 04:24:27.190060     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.347477 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:30 old-k8s-version-140381 kubelet[666]: E0328 04:24:30.038061     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.347666 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:41 old-k8s-version-140381 kubelet[666]: E0328 04:24:41.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.347993 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:42 old-k8s-version-140381 kubelet[666]: E0328 04:24:42.202188     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.348180 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:54 old-k8s-version-140381 kubelet[666]: E0328 04:24:54.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.348519 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:56 old-k8s-version-140381 kubelet[666]: E0328 04:24:56.201866     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.348704 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:08 old-k8s-version-140381 kubelet[666]: E0328 04:25:08.203049     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.349034 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:09 old-k8s-version-140381 kubelet[666]: E0328 04:25:09.202098     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.349219 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:20 old-k8s-version-140381 kubelet[666]: E0328 04:25:20.202174     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.349552 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: E0328 04:25:24.201883     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.349736 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:34 old-k8s-version-140381 kubelet[666]: E0328 04:25:34.207786     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.350066 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: E0328 04:25:37.201754     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.350250 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:45 old-k8s-version-140381 kubelet[666]: E0328 04:25:45.204388     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.350577 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: E0328 04:25:50.205165     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.350761 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:58 old-k8s-version-140381 kubelet[666]: E0328 04:25:58.202750     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.351089 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.351274 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.352357 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.352556 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.352892 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:37.352907 3452995 logs.go:123] Gathering logs for etcd [1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f] ...
	I0328 04:26:37.352921 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:37.405589 3452995 logs.go:123] Gathering logs for kube-proxy [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d] ...
	I0328 04:26:37.405617 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:37.445134 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:37.445160 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 04:26:37.445216 3452995 out.go:239] X Problems detected in kubelet:
	W0328 04:26:37.445227 3452995 out.go:239]   Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.445242 3452995 out.go:239]   Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.445251 3452995 out.go:239]   Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:37.445281 3452995 out.go:239]   Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:37.445288 3452995 out.go:239]   Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:37.445297 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:37.445303 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
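The two failures that dominate the kubelet log above are deterministic: metrics-server points at the unresolvable registry fake.domain, so every pull dies at DNS lookup (ErrImagePull, then ImagePullBackOff), while dashboard-metrics-scraper exits on start and walks kubelet's standard restart back-off (10s, 20s, 40s, 1m20s, 2m40s, as the successive CrashLoopBackOff messages show). A minimal sketch of reproducing the pull failure by hand, assuming the cluster is still running and crictl is on the node's PATH:

    # Re-run the failing image pull inside the node (profile name taken from the log)
    minikube -p old-k8s-version-140381 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
    # Expected error, matching the kubelet events above:
    #   dial tcp: lookup fake.domain on 192.168.85.1:53: no such host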
	I0328 04:26:41.005218 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:43.500764 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:46.000922 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:48.002727 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
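The four pod_ready lines above come from a second test process (pid 3458290) polling a metrics-server pod in a different profile; its Ready condition stays False throughout this window. A rough kubectl equivalent of that poll, with <context> left as a placeholder since that run's profile name does not appear here:

    # Check the pod's Ready condition directly (sketch; <context> is hypothetical)
    kubectl --context <context> -n kube-system get pod metrics-server-57f55c9bc5-p9ntv \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" until the pod's containers are up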
	I0328 04:26:47.446294 3452995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 04:26:47.461377 3452995 api_server.go:72] duration metric: took 5m57.453447459s to wait for apiserver process to appear ...
	I0328 04:26:47.461406 3452995 api_server.go:88] waiting for apiserver healthz status ...
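Having confirmed the apiserver process with pgrep, the tool now waits for the apiserver's healthz endpoint to report healthy. A manual spot check can go through kubectl's raw API path, assuming the kubeconfig context carries the profile name as minikube normally sets it:

    # Probe apiserver health over the raw API path (sketch)
    kubectl --context old-k8s-version-140381 get --raw=/healthz
    # Prints "ok" once the apiserver answers healthily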
	I0328 04:26:47.461449 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 04:26:47.461515 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 04:26:47.505663 3452995 cri.go:89] found id: "5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:47.505684 3452995 cri.go:89] found id: "1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:47.505690 3452995 cri.go:89] found id: ""
	I0328 04:26:47.505697 3452995 logs.go:276] 2 containers: [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16]
	I0328 04:26:47.505753 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.509540 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.513884 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 04:26:47.513961 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 04:26:47.563757 3452995 cri.go:89] found id: "332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:47.563778 3452995 cri.go:89] found id: "1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:47.563783 3452995 cri.go:89] found id: ""
	I0328 04:26:47.563790 3452995 logs.go:276] 2 containers: [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f]
	I0328 04:26:47.563848 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.567669 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.571457 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 04:26:47.571532 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 04:26:47.612387 3452995 cri.go:89] found id: "8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:47.612406 3452995 cri.go:89] found id: "af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:47.612411 3452995 cri.go:89] found id: ""
	I0328 04:26:47.612418 3452995 logs.go:276] 2 containers: [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22]
	I0328 04:26:47.612474 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.616155 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.621215 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 04:26:47.621299 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 04:26:47.663527 3452995 cri.go:89] found id: "42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:47.663549 3452995 cri.go:89] found id: "105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:47.663554 3452995 cri.go:89] found id: ""
	I0328 04:26:47.663561 3452995 logs.go:276] 2 containers: [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6]
	I0328 04:26:47.663615 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.667290 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.670596 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 04:26:47.670662 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 04:26:47.711008 3452995 cri.go:89] found id: "99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:47.711080 3452995 cri.go:89] found id: "4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:47.711104 3452995 cri.go:89] found id: ""
	I0328 04:26:47.711132 3452995 logs.go:276] 2 containers: [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508]
	I0328 04:26:47.711236 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.715134 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.718983 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 04:26:47.719060 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 04:26:47.756880 3452995 cri.go:89] found id: "0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:47.756906 3452995 cri.go:89] found id: "8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:47.756911 3452995 cri.go:89] found id: ""
	I0328 04:26:47.756924 3452995 logs.go:276] 2 containers: [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd]
	I0328 04:26:47.756990 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.760730 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.764419 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 04:26:47.764493 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 04:26:47.803244 3452995 cri.go:89] found id: "225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:47.803268 3452995 cri.go:89] found id: "ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:47.803273 3452995 cri.go:89] found id: ""
	I0328 04:26:47.803281 3452995 logs.go:276] 2 containers: [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff]
	I0328 04:26:47.803342 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.806944 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.810244 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 04:26:47.810345 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 04:26:47.853320 3452995 cri.go:89] found id: "3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:47.853343 3452995 cri.go:89] found id: "b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:47.853349 3452995 cri.go:89] found id: ""
	I0328 04:26:47.853357 3452995 logs.go:276] 2 containers: [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a]
	I0328 04:26:47.853431 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.857639 3452995 ssh_runner.go:195] Run: which crictl
	I0328 04:26:47.861144 3452995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 04:26:47.861258 3452995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 04:26:47.903777 3452995 cri.go:89] found id: "ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:47.903798 3452995 cri.go:89] found id: ""
	I0328 04:26:47.903806 3452995 logs.go:276] 1 containers: [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0]
	I0328 04:26:47.903873 3452995 ssh_runner.go:195] Run: which crictl
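The block above is one discovery pass: for each control-plane component the tool lists all matching containers with crictl (two IDs per component here, apparently the pre- and post-restart instances; kubernetes-dashboard has only one), resolving crictl's path between queries. A condensed shell equivalent, with the component names taken verbatim from the log:

    # Sketch of the container-discovery loop shown above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Each ID found is then tailed with:
    #   sudo /usr/bin/crictl logs --tail 400 <container-id>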
	I0328 04:26:47.907798 3452995 logs.go:123] Gathering logs for kube-scheduler [105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6] ...
	I0328 04:26:47.907825 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6"
	I0328 04:26:47.946782 3452995 logs.go:123] Gathering logs for kube-controller-manager [8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd] ...
	I0328 04:26:47.946817 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd"
	I0328 04:26:48.012150 3452995 logs.go:123] Gathering logs for kindnet [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b] ...
	I0328 04:26:48.012190 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b"
	I0328 04:26:48.054567 3452995 logs.go:123] Gathering logs for kubernetes-dashboard [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0] ...
	I0328 04:26:48.054602 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0"
	I0328 04:26:48.098363 3452995 logs.go:123] Gathering logs for kubelet ...
	I0328 04:26:48.098450 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 04:26:48.155763 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184708     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.155994 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184830     666 reflector.go:138] object-"kube-system"/"kindnet-token-sg2gk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-sg2gk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156208 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184889     666 reflector.go:138] object-"kube-system"/"coredns-token-ssc6f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ssc6f" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156425 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.184963     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156691 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185036     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vl9zr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vl9zr" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.156912 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185101     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-2bzlj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2bzlj" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.157124 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185160     666 reflector.go:138] object-"default"/"default-token-pcfrx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pcfrx" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.157342 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:10 old-k8s-version-140381 kubelet[666]: E0328 04:21:10.185207     666 reflector.go:138] object-"kube-system"/"metrics-server-token-zcjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zcjpk" is forbidden: User "system:node:old-k8s-version-140381" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140381' and this object
	W0328 04:26:48.164111 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.343501     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.164527 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:12 old-k8s-version-140381 kubelet[666]: E0328 04:21:12.682344     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.168479 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:27 old-k8s-version-140381 kubelet[666]: E0328 04:21:27.214018     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.170568 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:37 old-k8s-version-140381 kubelet[666]: E0328 04:21:37.780553     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.171273 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:38 old-k8s-version-140381 kubelet[666]: E0328 04:21:38.784869     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.171456 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:39 old-k8s-version-140381 kubelet[666]: E0328 04:21:39.202052     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.171784 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:40 old-k8s-version-140381 kubelet[666]: E0328 04:21:40.038234     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.172222 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:42 old-k8s-version-140381 kubelet[666]: E0328 04:21:42.795786     666 pod_workers.go:191] Error syncing pod 12216018-fd85-43f2-8766-9091100b1b60 ("storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(12216018-fd85-43f2-8766-9091100b1b60)"
	W0328 04:26:48.175098 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.216611     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.175422 3452995 logs.go:138] Found kubelet problem: Mar 28 04:21:52 old-k8s-version-140381 kubelet[666]: E0328 04:21:52.819852     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176007 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:00 old-k8s-version-140381 kubelet[666]: E0328 04:22:00.038034     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176190 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:05 old-k8s-version-140381 kubelet[666]: E0328 04:22:05.202143     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.176775 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:13 old-k8s-version-140381 kubelet[666]: E0328 04:22:13.870530     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.176976 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:17 old-k8s-version-140381 kubelet[666]: E0328 04:22:17.239591     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.177306 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:20 old-k8s-version-140381 kubelet[666]: E0328 04:22:20.037953     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.177487 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:30 old-k8s-version-140381 kubelet[666]: E0328 04:22:30.204576     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.177811 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:31 old-k8s-version-140381 kubelet[666]: E0328 04:22:31.201687     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.178134 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:43 old-k8s-version-140381 kubelet[666]: E0328 04:22:43.201963     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.180561 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:44 old-k8s-version-140381 kubelet[666]: E0328 04:22:44.231952     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.181146 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:55 old-k8s-version-140381 kubelet[666]: E0328 04:22:55.985948     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.181332 3452995 logs.go:138] Found kubelet problem: Mar 28 04:22:57 old-k8s-version-140381 kubelet[666]: E0328 04:22:57.202196     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.181655 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:00 old-k8s-version-140381 kubelet[666]: E0328 04:23:00.039239     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.181836 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:08 old-k8s-version-140381 kubelet[666]: E0328 04:23:08.203565     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.182159 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:14 old-k8s-version-140381 kubelet[666]: E0328 04:23:14.202407     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.182339 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:19 old-k8s-version-140381 kubelet[666]: E0328 04:23:19.201996     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.182665 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:25 old-k8s-version-140381 kubelet[666]: E0328 04:23:25.201722     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.182849 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:33 old-k8s-version-140381 kubelet[666]: E0328 04:23:33.201910     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.183173 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:40 old-k8s-version-140381 kubelet[666]: E0328 04:23:40.205750     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.183357 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:45 old-k8s-version-140381 kubelet[666]: E0328 04:23:45.202153     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.183679 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:51 old-k8s-version-140381 kubelet[666]: E0328 04:23:51.202418     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.183861 3452995 logs.go:138] Found kubelet problem: Mar 28 04:23:59 old-k8s-version-140381 kubelet[666]: E0328 04:23:59.202144     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.184187 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:02 old-k8s-version-140381 kubelet[666]: E0328 04:24:02.205580     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.186606 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:13 old-k8s-version-140381 kubelet[666]: E0328 04:24:13.211613     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0328 04:26:48.186931 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:15 old-k8s-version-140381 kubelet[666]: E0328 04:24:15.201704     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.187241 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:26 old-k8s-version-140381 kubelet[666]: E0328 04:24:26.205065     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.187693 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:27 old-k8s-version-140381 kubelet[666]: E0328 04:24:27.190060     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188017 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:30 old-k8s-version-140381 kubelet[666]: E0328 04:24:30.038061     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188199 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:41 old-k8s-version-140381 kubelet[666]: E0328 04:24:41.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.188547 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:42 old-k8s-version-140381 kubelet[666]: E0328 04:24:42.202188     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.188733 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:54 old-k8s-version-140381 kubelet[666]: E0328 04:24:54.202037     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.189060 3452995 logs.go:138] Found kubelet problem: Mar 28 04:24:56 old-k8s-version-140381 kubelet[666]: E0328 04:24:56.201866     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.189245 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:08 old-k8s-version-140381 kubelet[666]: E0328 04:25:08.203049     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.189569 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:09 old-k8s-version-140381 kubelet[666]: E0328 04:25:09.202098     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.189750 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:20 old-k8s-version-140381 kubelet[666]: E0328 04:25:20.202174     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.190073 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: E0328 04:25:24.201883     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.190256 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:34 old-k8s-version-140381 kubelet[666]: E0328 04:25:34.207786     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.190579 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: E0328 04:25:37.201754     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.190761 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:45 old-k8s-version-140381 kubelet[666]: E0328 04:25:45.204388     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.191084 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: E0328 04:25:50.205165     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.192224 3452995 logs.go:138] Found kubelet problem: Mar 28 04:25:58 old-k8s-version-140381 kubelet[666]: E0328 04:25:58.202750     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.192568 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.192752 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.193083 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.193265 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.193592 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:48.193773 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:38 old-k8s-version-140381 kubelet[666]: E0328 04:26:38.206755     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:48.194095 3452995 logs.go:138] Found kubelet problem: Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: E0328 04:26:40.201832     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:48.194105 3452995 logs.go:123] Gathering logs for kube-apiserver [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d] ...
	I0328 04:26:48.194122 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d"
	I0328 04:26:48.273490 3452995 logs.go:123] Gathering logs for etcd [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864] ...
	I0328 04:26:48.273526 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864"
	I0328 04:26:48.325499 3452995 logs.go:123] Gathering logs for coredns [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000] ...
	I0328 04:26:48.325533 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000"
	I0328 04:26:48.383276 3452995 logs.go:123] Gathering logs for kube-proxy [4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508] ...
	I0328 04:26:48.383309 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508"
	I0328 04:26:48.422168 3452995 logs.go:123] Gathering logs for storage-provisioner [b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a] ...
	I0328 04:26:48.422201 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a"
	I0328 04:26:48.469344 3452995 logs.go:123] Gathering logs for container status ...
	I0328 04:26:48.469372 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 04:26:48.519326 3452995 logs.go:123] Gathering logs for dmesg ...
	I0328 04:26:48.519364 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 04:26:48.543760 3452995 logs.go:123] Gathering logs for describe nodes ...
	I0328 04:26:48.543788 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 04:26:48.683490 3452995 logs.go:123] Gathering logs for kube-apiserver [1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16] ...
	I0328 04:26:48.683520 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16"
	I0328 04:26:48.740706 3452995 logs.go:123] Gathering logs for etcd [1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f] ...
	I0328 04:26:48.740740 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f"
	I0328 04:26:48.788535 3452995 logs.go:123] Gathering logs for coredns [af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22] ...
	I0328 04:26:48.788610 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22"
	I0328 04:26:48.831299 3452995 logs.go:123] Gathering logs for kube-scheduler [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186] ...
	I0328 04:26:48.831380 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186"
	I0328 04:26:48.874554 3452995 logs.go:123] Gathering logs for kube-controller-manager [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0] ...
	I0328 04:26:48.874580 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0"
	I0328 04:26:48.935711 3452995 logs.go:123] Gathering logs for containerd ...
	I0328 04:26:48.935742 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 04:26:48.997937 3452995 logs.go:123] Gathering logs for kube-proxy [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d] ...
	I0328 04:26:48.997987 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d"
	I0328 04:26:49.040653 3452995 logs.go:123] Gathering logs for kindnet [ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff] ...
	I0328 04:26:49.040682 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff"
	I0328 04:26:49.081516 3452995 logs.go:123] Gathering logs for storage-provisioner [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20] ...
	I0328 04:26:49.081545 3452995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20"
	I0328 04:26:49.124602 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:49.124627 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 04:26:49.124677 3452995 out.go:239] X Problems detected in kubelet:
	W0328 04:26:49.124693 3452995 out.go:239]   Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:49.124701 3452995 out.go:239]   Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:49.124710 3452995 out.go:239]   Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	W0328 04:26:49.124725 3452995 out.go:239]   Mar 28 04:26:38 old-k8s-version-140381 kubelet[666]: E0328 04:26:38.206755     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 04:26:49.124732 3452995 out.go:239]   Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: E0328 04:26:40.201832     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	I0328 04:26:49.124742 3452995 out.go:304] Setting ErrFile to fd 2...
	I0328 04:26:49.124750 3452995 out.go:338] TERM=,COLORTERM=, which probably does not support color
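	
	The kubelet problems summarized above are scraped from the node's systemd journal. A minimal way to pull the same lines by hand, assuming shell access to the node; the journalctl invocation matches the one this log runs later ("sudo journalctl -u kubelet -n 400"), while the grep filter is an illustrative addition:
	
	    # Surface the same back-off errors the summary above reports (filter is illustrative)
	    sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff'
	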
	I0328 04:26:50.022032 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:52.500996 3458290 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace has status "Ready":"False"
	I0328 04:26:52.994431 3458290 pod_ready.go:81] duration metric: took 4m0.000038493s for pod "metrics-server-57f55c9bc5-p9ntv" in "kube-system" namespace to be "Ready" ...
	E0328 04:26:52.994461 3458290 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 04:26:52.994472 3458290 pod_ready.go:38] duration metric: took 4m10.809072319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 04:26:52.994488 3458290 api_server.go:52] waiting for apiserver process to appear ...
	I0328 04:26:52.994524 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0328 04:26:52.994597 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 04:26:53.038594 3458290 cri.go:89] found id: "1fe6bd010eafae6f1d3c76bb448c9c79b1369e29e1d876695308e0a8ff6bb18a"
	I0328 04:26:53.038617 3458290 cri.go:89] found id: "d4ade15f2033664cfc7e9e2fe0c319dc3136c8a375169b3df234ac6fd61e284e"
	I0328 04:26:53.038622 3458290 cri.go:89] found id: ""
	I0328 04:26:53.038629 3458290 logs.go:276] 2 containers: [1fe6bd010eafae6f1d3c76bb448c9c79b1369e29e1d876695308e0a8ff6bb18a d4ade15f2033664cfc7e9e2fe0c319dc3136c8a375169b3df234ac6fd61e284e]
	I0328 04:26:53.038684 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.042354 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.045937 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0328 04:26:53.046053 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 04:26:53.087715 3458290 cri.go:89] found id: "c01be00f08f49fa118bf548181a2a1ef7429d5db50d9780808791b2b09c04054"
	I0328 04:26:53.087740 3458290 cri.go:89] found id: "c3b0cf249a97bdf75a6df5d2ba318c6846a48b18cf15064914aca399c4cde5ac"
	I0328 04:26:53.087760 3458290 cri.go:89] found id: ""
	I0328 04:26:53.087773 3458290 logs.go:276] 2 containers: [c01be00f08f49fa118bf548181a2a1ef7429d5db50d9780808791b2b09c04054 c3b0cf249a97bdf75a6df5d2ba318c6846a48b18cf15064914aca399c4cde5ac]
	I0328 04:26:53.087835 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.091369 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.096091 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0328 04:26:53.096166 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 04:26:53.132223 3458290 cri.go:89] found id: "b0c177e1360fadaba487735814e0196e862c817669172ce69a5341729dbc4180"
	I0328 04:26:53.132247 3458290 cri.go:89] found id: "98dfc5e3f3e768f015a7775e0e88f74415eb603f19cf42740d0493aed17a0ca8"
	I0328 04:26:53.132252 3458290 cri.go:89] found id: ""
	I0328 04:26:53.132260 3458290 logs.go:276] 2 containers: [b0c177e1360fadaba487735814e0196e862c817669172ce69a5341729dbc4180 98dfc5e3f3e768f015a7775e0e88f74415eb603f19cf42740d0493aed17a0ca8]
	I0328 04:26:53.132315 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.135876 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.139320 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0328 04:26:53.139398 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 04:26:53.180700 3458290 cri.go:89] found id: "96a5e6ad89e264776bccb89bdeddf1745da5e9423cb2e87c9332aa4850ccc090"
	I0328 04:26:53.180725 3458290 cri.go:89] found id: "3f0e618e442c98e8c758dcf681167c817b1037dfa4bfa4215739878296af4a74"
	I0328 04:26:53.180730 3458290 cri.go:89] found id: ""
	I0328 04:26:53.180738 3458290 logs.go:276] 2 containers: [96a5e6ad89e264776bccb89bdeddf1745da5e9423cb2e87c9332aa4850ccc090 3f0e618e442c98e8c758dcf681167c817b1037dfa4bfa4215739878296af4a74]
	I0328 04:26:53.180813 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.184434 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.187773 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0328 04:26:53.187847 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 04:26:53.226155 3458290 cri.go:89] found id: "e21569959944b431c9f69b43c5e2e88f020a6e61be25e6c493df242ab133bc35"
	I0328 04:26:53.226178 3458290 cri.go:89] found id: "a95108a9241a2df26dfe727dff570eeaac3d2a4f6b109a27c67dcc8e48682265"
	I0328 04:26:53.226182 3458290 cri.go:89] found id: ""
	I0328 04:26:53.226190 3458290 logs.go:276] 2 containers: [e21569959944b431c9f69b43c5e2e88f020a6e61be25e6c493df242ab133bc35 a95108a9241a2df26dfe727dff570eeaac3d2a4f6b109a27c67dcc8e48682265]
	I0328 04:26:53.226270 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.230348 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.234147 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 04:26:53.234229 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 04:26:53.276846 3458290 cri.go:89] found id: "839681e516e36ac09bc3799ce20ad12e143c0b5fc803c13f43cd7a9baadc93bd"
	I0328 04:26:53.276883 3458290 cri.go:89] found id: "6fdf6f988a58a053f1a34e4a124be068272f9ae033291175c1769e6593c1f24e"
	I0328 04:26:53.276889 3458290 cri.go:89] found id: ""
	I0328 04:26:53.276897 3458290 logs.go:276] 2 containers: [839681e516e36ac09bc3799ce20ad12e143c0b5fc803c13f43cd7a9baadc93bd 6fdf6f988a58a053f1a34e4a124be068272f9ae033291175c1769e6593c1f24e]
	I0328 04:26:53.276981 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.280649 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.284223 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0328 04:26:53.284302 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 04:26:53.331137 3458290 cri.go:89] found id: "ec28836f3b8091e9f526970931266a83e2dcac44224c7d94cf8b3e993f43676c"
	I0328 04:26:53.331214 3458290 cri.go:89] found id: "392c0c82afe69c32edd37ac7eb093f1c137f825232f1530d0e64d2b8380c11d6"
	I0328 04:26:53.331234 3458290 cri.go:89] found id: ""
	I0328 04:26:53.331257 3458290 logs.go:276] 2 containers: [ec28836f3b8091e9f526970931266a83e2dcac44224c7d94cf8b3e993f43676c 392c0c82afe69c32edd37ac7eb093f1c137f825232f1530d0e64d2b8380c11d6]
	I0328 04:26:53.331339 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.335240 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.338738 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0328 04:26:53.338833 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 04:26:53.385979 3458290 cri.go:89] found id: "0053dd8111d74e33627c664337dc2d961f294d522aaf5a89b20b16733ca351a5"
	I0328 04:26:53.386002 3458290 cri.go:89] found id: "4204b683b4eae0bb0e009e7de3a03a0e9e90336b0f9433d73619f73c98beae2b"
	I0328 04:26:53.386007 3458290 cri.go:89] found id: ""
	I0328 04:26:53.386014 3458290 logs.go:276] 2 containers: [0053dd8111d74e33627c664337dc2d961f294d522aaf5a89b20b16733ca351a5 4204b683b4eae0bb0e009e7de3a03a0e9e90336b0f9433d73619f73c98beae2b]
	I0328 04:26:53.386069 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.395495 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.399347 3458290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 04:26:53.399439 3458290 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 04:26:53.446353 3458290 cri.go:89] found id: "419bbf9a533619579f3e63e06755ee87485fad59422a7c515cfcd7d8b6c090f1"
	I0328 04:26:53.446378 3458290 cri.go:89] found id: ""
	I0328 04:26:53.446387 3458290 logs.go:276] 1 containers: [419bbf9a533619579f3e63e06755ee87485fad59422a7c515cfcd7d8b6c090f1]
	I0328 04:26:53.446477 3458290 ssh_runner.go:195] Run: which crictl
	I0328 04:26:53.449987 3458290 logs.go:123] Gathering logs for containerd ...
	I0328 04:26:53.450019 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0328 04:26:53.509989 3458290 logs.go:123] Gathering logs for dmesg ...
	I0328 04:26:53.510026 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 04:26:53.539499 3458290 logs.go:123] Gathering logs for etcd [c01be00f08f49fa118bf548181a2a1ef7429d5db50d9780808791b2b09c04054] ...
	I0328 04:26:53.539535 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c01be00f08f49fa118bf548181a2a1ef7429d5db50d9780808791b2b09c04054"
	I0328 04:26:53.590160 3458290 logs.go:123] Gathering logs for coredns [98dfc5e3f3e768f015a7775e0e88f74415eb603f19cf42740d0493aed17a0ca8] ...
	I0328 04:26:53.590190 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98dfc5e3f3e768f015a7775e0e88f74415eb603f19cf42740d0493aed17a0ca8"
	I0328 04:26:53.633196 3458290 logs.go:123] Gathering logs for kube-controller-manager [839681e516e36ac09bc3799ce20ad12e143c0b5fc803c13f43cd7a9baadc93bd] ...
	I0328 04:26:53.633223 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 839681e516e36ac09bc3799ce20ad12e143c0b5fc803c13f43cd7a9baadc93bd"
	I0328 04:26:53.694925 3458290 logs.go:123] Gathering logs for storage-provisioner [4204b683b4eae0bb0e009e7de3a03a0e9e90336b0f9433d73619f73c98beae2b] ...
	I0328 04:26:53.694960 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4204b683b4eae0bb0e009e7de3a03a0e9e90336b0f9433d73619f73c98beae2b"
	I0328 04:26:53.732421 3458290 logs.go:123] Gathering logs for kubernetes-dashboard [419bbf9a533619579f3e63e06755ee87485fad59422a7c515cfcd7d8b6c090f1] ...
	I0328 04:26:53.732448 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 419bbf9a533619579f3e63e06755ee87485fad59422a7c515cfcd7d8b6c090f1"
	I0328 04:26:53.776079 3458290 logs.go:123] Gathering logs for kubelet ...
	I0328 04:26:53.776108 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 04:26:53.857078 3458290 logs.go:123] Gathering logs for kube-apiserver [1fe6bd010eafae6f1d3c76bb448c9c79b1369e29e1d876695308e0a8ff6bb18a] ...
	I0328 04:26:53.857119 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fe6bd010eafae6f1d3c76bb448c9c79b1369e29e1d876695308e0a8ff6bb18a"
	I0328 04:26:53.915155 3458290 logs.go:123] Gathering logs for kube-apiserver [d4ade15f2033664cfc7e9e2fe0c319dc3136c8a375169b3df234ac6fd61e284e] ...
	I0328 04:26:53.915194 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ade15f2033664cfc7e9e2fe0c319dc3136c8a375169b3df234ac6fd61e284e"
	I0328 04:26:53.969204 3458290 logs.go:123] Gathering logs for etcd [c3b0cf249a97bdf75a6df5d2ba318c6846a48b18cf15064914aca399c4cde5ac] ...
	I0328 04:26:53.969235 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3b0cf249a97bdf75a6df5d2ba318c6846a48b18cf15064914aca399c4cde5ac"
	I0328 04:26:54.020966 3458290 logs.go:123] Gathering logs for kube-proxy [e21569959944b431c9f69b43c5e2e88f020a6e61be25e6c493df242ab133bc35] ...
	I0328 04:26:54.021003 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e21569959944b431c9f69b43c5e2e88f020a6e61be25e6c493df242ab133bc35"
	I0328 04:26:54.061436 3458290 logs.go:123] Gathering logs for describe nodes ...
	I0328 04:26:54.061465 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 04:26:54.262796 3458290 logs.go:123] Gathering logs for coredns [b0c177e1360fadaba487735814e0196e862c817669172ce69a5341729dbc4180] ...
	I0328 04:26:54.262824 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c177e1360fadaba487735814e0196e862c817669172ce69a5341729dbc4180"
	I0328 04:26:54.305879 3458290 logs.go:123] Gathering logs for storage-provisioner [0053dd8111d74e33627c664337dc2d961f294d522aaf5a89b20b16733ca351a5] ...
	I0328 04:26:54.305912 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0053dd8111d74e33627c664337dc2d961f294d522aaf5a89b20b16733ca351a5"
	I0328 04:26:54.344475 3458290 logs.go:123] Gathering logs for kindnet [392c0c82afe69c32edd37ac7eb093f1c137f825232f1530d0e64d2b8380c11d6] ...
	I0328 04:26:54.344507 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 392c0c82afe69c32edd37ac7eb093f1c137f825232f1530d0e64d2b8380c11d6"
	I0328 04:26:54.386394 3458290 logs.go:123] Gathering logs for container status ...
	I0328 04:26:54.386421 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 04:26:54.448829 3458290 logs.go:123] Gathering logs for kube-scheduler [96a5e6ad89e264776bccb89bdeddf1745da5e9423cb2e87c9332aa4850ccc090] ...
	I0328 04:26:54.448983 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96a5e6ad89e264776bccb89bdeddf1745da5e9423cb2e87c9332aa4850ccc090"
	I0328 04:26:54.487766 3458290 logs.go:123] Gathering logs for kube-scheduler [3f0e618e442c98e8c758dcf681167c817b1037dfa4bfa4215739878296af4a74] ...
	I0328 04:26:54.487795 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0e618e442c98e8c758dcf681167c817b1037dfa4bfa4215739878296af4a74"
	I0328 04:26:54.551694 3458290 logs.go:123] Gathering logs for kube-proxy [a95108a9241a2df26dfe727dff570eeaac3d2a4f6b109a27c67dcc8e48682265] ...
	I0328 04:26:54.551724 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a95108a9241a2df26dfe727dff570eeaac3d2a4f6b109a27c67dcc8e48682265"
	I0328 04:26:54.600237 3458290 logs.go:123] Gathering logs for kube-controller-manager [6fdf6f988a58a053f1a34e4a124be068272f9ae033291175c1769e6593c1f24e] ...
	I0328 04:26:54.600264 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fdf6f988a58a053f1a34e4a124be068272f9ae033291175c1769e6593c1f24e"
	I0328 04:26:54.658665 3458290 logs.go:123] Gathering logs for kindnet [ec28836f3b8091e9f526970931266a83e2dcac44224c7d94cf8b3e993f43676c] ...
	I0328 04:26:54.658701 3458290 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec28836f3b8091e9f526970931266a83e2dcac44224c7d94cf8b3e993f43676c"
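	
	Each "Gathering logs for <component>" step above follows the same two-command pattern: enumerate container IDs with crictl ps, then dump each one with crictl logs. A minimal sketch of that loop, assuming crictl is on the node's PATH; kube-apiserver is used purely as an illustrative component name:
	
	    # Reproduce the enumerate-then-dump pattern seen throughout this log
	    name=kube-apiserver
	    for id in $(sudo crictl ps -a --quiet --name="${name}"); do
	      echo "=== ${name} ${id} ==="
	      sudo crictl logs --tail 400 "${id}"
	    done
	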
	I0328 04:26:59.125967 3452995 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0328 04:26:59.138702 3452995 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0328 04:26:59.141044 3452995 out.go:177] 
	W0328 04:26:59.143335 3452995 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0328 04:26:59.143387 3452995 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0328 04:26:59.143408 3452995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0328 04:26:59.143413 3452995 out.go:239] * 
	W0328 04:26:59.144298 3452995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 04:26:59.146742 3452995 out.go:177] 
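	
	The healthz probe logged at 04:26:59 can be reproduced from any host that reaches the node. A sketch assuming the apiserver's default anonymous access to /healthz; -k skips verification of the cluster's self-signed CA:
	
	    # Same check minikube performed above; prints "ok" on a healthy apiserver
	    curl -sk https://192.168.85.2:8443/healthz
	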
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	20acd12f7e916       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   bf9ab8406160b       dashboard-metrics-scraper-8d5bb5db8-rgkgb
	3df02b7fe0a2d       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   6a61b7b103dc8       storage-provisioner
	ef51c0050c256       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   60941b78f96f7       kubernetes-dashboard-cd95d586-2kmz9
	225657a7de15a       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   511ac5ebc9059       kindnet-dprgv
	8c7fae1bae21b       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   0d658f389c023       coredns-74ff55c5b-cbbwd
	99a3fea888bd3       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   2a15f6bad0774       kube-proxy-qp768
	b73430246eaca       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   6a61b7b103dc8       storage-provisioner
	5123a3a9dea52       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   628e96484d922       busybox
	5057ffa862b78       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   f6a165fae9435       kube-apiserver-old-k8s-version-140381
	0c1e536ea2d10       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   c96dea333b16a       kube-controller-manager-old-k8s-version-140381
	332079c327638       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   f1ce2adb5ac69       etcd-old-k8s-version-140381
	42fb577a72d6f       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   cd66a3d465d9b       kube-scheduler-old-k8s-version-140381
	60c4b23b54b67       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   5222aaaaec7fb       busybox
	af191444b0ff2       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   1b2cedbdd2e03       coredns-74ff55c5b-cbbwd
	ce8f84a31a490       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   ca1fb1955fa01       kindnet-dprgv
	4847eba79f983       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   7210dabf35534       kube-proxy-qp768
	1209f8602d3c9       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   cee286ad72729       etcd-old-k8s-version-140381
	105d61347ffff       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   b15397f44d0c3       kube-scheduler-old-k8s-version-140381
	1def164f172fa       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a7828c9889f51       kube-apiserver-old-k8s-version-140381
	8ab125e030e14       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   0a983d15d4a6e       kube-controller-manager-old-k8s-version-140381
	
	
	==> containerd <==
	Mar 28 04:22:44 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:44.228583430Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 28 04:22:44 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:44.230545739Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.205773218Z" level=info msg="CreateContainer within sandbox \"bf9ab8406160b890260a9ea7e758f79b1e032cee2a5de4129158e22fd0853b0c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.234398292Z" level=info msg="CreateContainer within sandbox \"bf9ab8406160b890260a9ea7e758f79b1e032cee2a5de4129158e22fd0853b0c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f\""
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.235244497Z" level=info msg="StartContainer for \"7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f\""
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.357111589Z" level=info msg="StartContainer for \"7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f\" returns successfully"
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.431435072Z" level=info msg="shim disconnected" id=7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.431505733Z" level=warning msg="cleaning up after shim disconnected" id=7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f namespace=k8s.io
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.431521617Z" level=info msg="cleaning up dead shim"
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.441924941Z" level=warning msg="cleanup warnings time=\"2024-03-28T04:22:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n"
	Mar 28 04:22:55 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:55.993445676Z" level=info msg="RemoveContainer for \"9d77855a06ae362d7403815f929908ef538cf6a7533167a9aadd8c2bedd9c0f1\""
	Mar 28 04:22:56 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:22:56.023277937Z" level=info msg="RemoveContainer for \"9d77855a06ae362d7403815f929908ef538cf6a7533167a9aadd8c2bedd9c0f1\" returns successfully"
	Mar 28 04:24:13 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:13.202533414Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:24:13 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:13.208917304Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Mar 28 04:24:13 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:13.211121337Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.210393852Z" level=info msg="CreateContainer within sandbox \"bf9ab8406160b890260a9ea7e758f79b1e032cee2a5de4129158e22fd0853b0c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.230805745Z" level=info msg="CreateContainer within sandbox \"bf9ab8406160b890260a9ea7e758f79b1e032cee2a5de4129158e22fd0853b0c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1\""
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.231579344Z" level=info msg="StartContainer for \"20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1\""
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.294393112Z" level=info msg="StartContainer for \"20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1\" returns successfully"
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.325899222Z" level=info msg="shim disconnected" id=20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.325960144Z" level=warning msg="cleaning up after shim disconnected" id=20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1 namespace=k8s.io
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.325972090Z" level=info msg="cleaning up dead shim"
	Mar 28 04:24:26 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:26.333896860Z" level=warning msg="cleanup warnings time=\"2024-03-28T04:24:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3177 runtime=io.containerd.runc.v2\n"
	Mar 28 04:24:27 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:27.191503638Z" level=info msg="RemoveContainer for \"7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f\""
	Mar 28 04:24:27 old-k8s-version-140381 containerd[571]: time="2024-03-28T04:24:27.198743324Z" level=info msg="RemoveContainer for \"7fe16e36818733e6791c162b44e1efdfade30554409ee517f42c4d7fd8a6279f\" returns successfully"
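	
	The PullImage failures above are expected: fake.domain does not resolve, which appears to be deliberate in this test (the metrics-server image is pointed at an unreachable registry). Two ways to reproduce the failure from inside the node, assuming getent and crictl are present in the node image:
	
	    # Both commands should fail the same way the containerd log above does
	    getent hosts fake.domain || echo "fake.domain: no such host"
	    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	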
	
	
	==> coredns [8c7fae1bae21b33a2ad7a43f19ba05d86f3ac42ed499e97e425e19cda95aa000] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37655 - 61817 "HINFO IN 5943199145765070538.668263723201945190. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01205959s
	
	
	==> coredns [af191444b0ff2073db32d069a9ec8e88b581b168590153f8a45ed0023d37cb22] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41910 - 54408 "HINFO IN 4432571250754268193.2696807896660431174. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022513531s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-140381
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-140381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=old-k8s-version-140381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T04_18_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 04:18:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-140381
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 04:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 04:22:00 +0000   Thu, 28 Mar 2024 04:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 04:22:00 +0000   Thu, 28 Mar 2024 04:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 04:22:00 +0000   Thu, 28 Mar 2024 04:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 04:22:00 +0000   Thu, 28 Mar 2024 04:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-140381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022568Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d23878b350443b295e38acec79956c3
	  System UUID:                41435436-c933-45bb-b479-0522a8a337ff
	  Boot ID:                    6d3ffb57-9092-48f6-a12c-685c1918590f
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-cbbwd                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m17s
	  kube-system                 etcd-old-k8s-version-140381                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m24s
	  kube-system                 kindnet-dprgv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m17s
	  kube-system                 kube-apiserver-old-k8s-version-140381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-controller-manager-old-k8s-version-140381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-qp768                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-scheduler-old-k8s-version-140381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 metrics-server-9975d5f86-cccxt                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-rgkgb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-2kmz9               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             420Mi (5%)   220Mi (2%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m43s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m43s (x5 over 8m43s)  kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s (x4 over 8m43s)  kubelet     Node old-k8s-version-140381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s (x4 over 8m43s)  kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m24s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m24s                  kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s                  kubelet     Node old-k8s-version-140381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m24s                  kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m17s                  kubelet     Node old-k8s-version-140381 status is now: NodeReady
	  Normal  Starting                 8m16s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-140381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-140381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
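
The node summary above is standard kubectl describe node output. Assuming the kubectl context carries the profile name (the convention used elsewhere in this report), the same view can be regenerated with:

	kubectl --context old-k8s-version-140381 describe node old-k8s-version-140381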
	
	
	==> dmesg <==
	[  +0.001033] FS-Cache: O-key=[8] '64e2c90000000000'
	[  +0.000702] FS-Cache: N-cookie c=000000ae [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000078daed98
	[  +0.001090] FS-Cache: N-key=[8] '64e2c90000000000'
	[  +0.002505] FS-Cache: Duplicate cookie detected
	[  +0.000689] FS-Cache: O-cookie c=000000a8 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.001028] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000b2051977
	[  +0.001032] FS-Cache: O-key=[8] '64e2c90000000000'
	[  +0.000682] FS-Cache: N-cookie c=000000af [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000902] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=00000000c470c782
	[  +0.001039] FS-Cache: N-key=[8] '64e2c90000000000'
	[  +2.748949] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=000000a6 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=00000000c2463226
	[  +0.001012] FS-Cache: O-key=[8] '63e2c90000000000'
	[  +0.000769] FS-Cache: N-cookie c=000000b1 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000078daed98
	[  +0.001101] FS-Cache: N-key=[8] '63e2c90000000000'
	[  +0.354567] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=000000ab [p=000000a5 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=000000007dd5da0e{9p.inode} n=0000000032bd8fff
	[  +0.001034] FS-Cache: O-key=[8] '69e2c90000000000'
	[  +0.000792] FS-Cache: N-cookie c=000000b2 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=000000007dd5da0e{9p.inode} n=0000000030ee95e9
	[  +0.001125] FS-Cache: N-key=[8] '69e2c90000000000'
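
The FS-Cache "Duplicate cookie detected" entries look like host-kernel 9p/FS-Cache noise from the CI host rather than anything emitted by this test. A sketch for sampling recent kernel messages from the node directly, assuming the profile name used above:

	out/minikube-linux-arm64 -p old-k8s-version-140381 ssh "sudo dmesg | tail -n 30"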
	
	
	==> etcd [1209f8602d3c918af6eaebd317ba112887816e7df909b97f08502e714e49af4f] <==
	2024-03-28 04:18:18.971682 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/03/28 04:18:19 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/03/28 04:18:19 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/03/28 04:18:19 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/03/28 04:18:19 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/03/28 04:18:19 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-03-28 04:18:19.651308 I | etcdserver: published {Name:old-k8s-version-140381 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-03-28 04:18:19.651557 I | embed: ready to serve client requests
	2024-03-28 04:18:19.662791 I | embed: ready to serve client requests
	2024-03-28 04:18:19.667113 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-28 04:18:19.669259 I | embed: serving client requests on 192.168.85.2:2379
	2024-03-28 04:18:19.670672 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-28 04:18:19.672427 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-28 04:18:19.672593 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-28 04:18:43.762782 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:18:50.956527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:00.956462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:10.956611 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:20.956685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:30.956582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:40.956616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:19:50.956408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:20:00.957307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:20:10.956469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:20:20.956710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
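
The steady /health 200s show the first etcd instance stayed healthy until the node restart. Per the startup lines above, etcd 3.4 serves /health alongside /metrics on http://127.0.0.1:2381, so the same probe can be repeated from inside the node, e.g.:

	out/minikube-linux-arm64 -p old-k8s-version-140381 ssh "curl -s http://127.0.0.1:2381/health"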
	
	
	==> etcd [332079c3276387cdd604f79bfcd8a955656867678f670bc1d0baf9d981215864] <==
	2024-03-28 04:22:52.020514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:02.020647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:12.019836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:22.020159 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:32.019576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:42.021965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:23:52.019909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:02.019936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:12.019688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:22.019506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:32.023030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:42.019614 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:24:52.021320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:02.020574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:12.028410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:22.020626 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:32.020754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:42.019775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:25:52.019883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:02.021073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:12.019834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:22.020291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:32.019676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:42.020888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 04:26:52.020260 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 04:27:01 up 12:09,  0 users,  load average: 1.38, 1.67, 2.21
	Linux old-k8s-version-140381 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [225657a7de15afcc3501c5b9449c8ac4b99551a0a355ee208dd01132335b422b] <==
	I0328 04:24:55.315044       1 main.go:227] handling current node
	I0328 04:25:05.334795       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:05.334983       1 main.go:227] handling current node
	I0328 04:25:15.338711       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:15.338740       1 main.go:227] handling current node
	I0328 04:25:25.363401       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:25.363441       1 main.go:227] handling current node
	I0328 04:25:35.378816       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:35.378928       1 main.go:227] handling current node
	I0328 04:25:45.397762       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:45.397794       1 main.go:227] handling current node
	I0328 04:25:55.416032       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:25:55.416072       1 main.go:227] handling current node
	I0328 04:26:05.426529       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:05.426627       1 main.go:227] handling current node
	I0328 04:26:15.439329       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:15.439361       1 main.go:227] handling current node
	I0328 04:26:25.453732       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:25.453765       1 main.go:227] handling current node
	I0328 04:26:35.465238       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:35.465277       1 main.go:227] handling current node
	I0328 04:26:45.477369       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:45.477401       1 main.go:227] handling current node
	I0328 04:26:55.489251       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:26:55.489280       1 main.go:227] handling current node
	
	
	==> kindnet [ce8f84a31a49022941b6fa9dc50b13c5dc1c73ab8520a858910f0e996bcd96ff] <==
	I0328 04:18:45.563795       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 04:18:45.563862       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0328 04:18:45.563970       1 main.go:116] setting mtu 1500 for CNI 
	I0328 04:18:45.563980       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 04:18:45.563993       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 04:19:15.830873       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 04:19:15.846861       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:19:15.846895       1 main.go:227] handling current node
	I0328 04:19:25.869144       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:19:25.869370       1 main.go:227] handling current node
	I0328 04:19:35.892918       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:19:35.892946       1 main.go:227] handling current node
	I0328 04:19:45.905175       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:19:45.905215       1 main.go:227] handling current node
	I0328 04:19:55.928274       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:19:55.928304       1 main.go:227] handling current node
	I0328 04:20:05.951248       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:20:05.951425       1 main.go:227] handling current node
	I0328 04:20:15.955552       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:20:15.955583       1 main.go:227] handling current node
	I0328 04:20:25.981044       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0328 04:20:25.981074       1 main.go:227] handling current node
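
The single "Failed to get nodes ... i/o timeout" at 04:19:15 is kindnet starting before the apiserver was reachable; the next retry succeeded and the node-handling loop ran normally from then on. Current kindnet logs can be sampled with (assuming the daemonset keeps its app=kindnet pod label, visible in the controller-manager dump below):

	kubectl --context old-k8s-version-140381 -n kube-system logs -l app=kindnet --tail=20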
	
	
	==> kube-apiserver [1def164f172faa87e3651b9511067991ae4ac17773d8f5919b2daf5f0d5afb16] <==
	I0328 04:18:26.420665       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0328 04:18:26.420719       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0328 04:18:26.433013       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0328 04:18:26.436954       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0328 04:18:26.436978       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0328 04:18:26.923454       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 04:18:26.972567       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0328 04:18:27.121124       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0328 04:18:27.122393       1 controller.go:606] quota admission added evaluator for: endpoints
	I0328 04:18:27.129512       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 04:18:28.033420       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0328 04:18:28.690380       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0328 04:18:28.746508       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0328 04:18:37.168148       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 04:18:44.084877       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0328 04:18:44.228268       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0328 04:19:04.851848       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:19:04.851892       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:19:04.851951       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 04:19:43.668949       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:19:43.668991       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:19:43.669000       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 04:20:14.637313       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:20:14.637355       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:20:14.637364       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [5057ffa862b7844d6d98ee01618e8edb3a2ce71c6e96b06978d0b00af3cdbf1d] <==
	I0328 04:23:23.596707       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:23:23.596715       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 04:24:01.935860       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:24:01.935912       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:24:01.935921       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 04:24:12.922944       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 04:24:12.923063       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 04:24:12.923119       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 04:24:35.794723       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:24:35.794775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:24:35.794784       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 04:25:13.782998       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:25:13.783045       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:25:13.783054       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 04:25:51.511523       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:25:51.511567       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:25:51.511576       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 04:26:11.170469       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 04:26:11.170705       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 04:26:11.170727       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 04:26:27.379427       1 client.go:360] parsed scheme: "passthrough"
	I0328 04:26:27.379474       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 04:26:27.379624       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
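
The recurring 503 for v1beta1.metrics.k8s.io lines up with the metrics-server pod stuck in ImagePullBackOff (see the kubelet log below): the aggregated API is registered but has no healthy backend. One way to confirm is to read the APIService availability condition, e.g.:

	kubectl --context old-k8s-version-140381 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")]}'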
	
	
	==> kube-controller-manager [0c1e536ea2d10ca946cd7d682a3d6733ff33514e203f184f92d9697e13f92fb0] <==
	W0328 04:22:34.024741       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:23:00.097328       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:23:05.675451       1 request.go:655] Throttling request took 1.048499634s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0328 04:23:06.526993       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:23:30.598213       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:23:38.177363       1 request.go:655] Throttling request took 1.04743669s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 04:23:39.028851       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:24:01.100000       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:24:10.679300       1 request.go:655] Throttling request took 1.047271814s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 04:24:11.530639       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:24:31.648651       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:24:43.181079       1 request.go:655] Throttling request took 1.047437464s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 04:24:44.033833       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:25:02.150801       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:25:15.684428       1 request.go:655] Throttling request took 1.048511062s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0328 04:25:16.535720       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:25:32.674303       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:25:48.186143       1 request.go:655] Throttling request took 1.048413007s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0328 04:25:49.038462       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:26:03.176087       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:26:20.689036       1 request.go:655] Throttling request took 1.048275037s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0328 04:26:21.540685       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 04:26:33.677872       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 04:26:53.191264       1 request.go:655] Throttling request took 1.048301378s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 04:26:54.042996       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
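
These garbage-collector and resource-quota errors are downstream of the same dead metrics.k8s.io group: both controllers re-run API discovery on a timer, fail on the aggregated API, and retry, which also accounts for the client-side throttling messages. While the backend is down, the discovery failure should be reproducible with:

	kubectl --context old-k8s-version-140381 api-resources --api-group=metrics.k8s.io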
	
	
	==> kube-controller-manager [8ab125e030e14ca0de841ee6b8391f240e752439b6542ea41870cdf22ce4b9cd] <==
	I0328 04:18:44.161138       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-140381" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 04:18:44.163959       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0328 04:18:44.164843       1 shared_informer.go:247] Caches are synced for endpoint 
	I0328 04:18:44.166161       1 shared_informer.go:247] Caches are synced for attach detach 
	I0328 04:18:44.187556       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0328 04:18:44.213327       1 shared_informer.go:247] Caches are synced for deployment 
	I0328 04:18:44.214995       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0328 04:18:44.215386       1 shared_informer.go:247] Caches are synced for disruption 
	I0328 04:18:44.215504       1 disruption.go:339] Sending events to api server.
	I0328 04:18:44.226004       1 shared_informer.go:247] Caches are synced for resource quota 
	I0328 04:18:44.233340       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dprgv"
	I0328 04:18:44.265417       1 shared_informer.go:247] Caches are synced for resource quota 
	I0328 04:18:44.414141       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0328 04:18:44.420603       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5gwb9"
	I0328 04:18:44.423879       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0328 04:18:44.533290       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-cbbwd"
	E0328 04:18:44.540508       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e7710558-2e2f-46c7-90c1-4d954cc99790", ResourceVersion:"392", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847196308, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000cf11a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000cf1200)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000cf1260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000cf12c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000cf1320), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001646dc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000cf1380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000cf1440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000cf1500)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400130d680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40012b39e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001707e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400011ef18)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40012b3a38)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	E0328 04:18:44.645364       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9ae5c4de-cc9f-48d0-a9dc-390321db9272", ResourceVersion:"278", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847196309, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c75900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c75920)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001c75940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c75960), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c75980), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c759a0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c759c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c75a00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c66900), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c77128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001e22a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40006416c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c77170)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0328 04:18:44.701222       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0328 04:18:44.701253       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0328 04:18:44.724073       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0328 04:18:45.939196       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0328 04:18:45.952543       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5gwb9"
	I0328 04:18:49.048843       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0328 04:20:28.863384       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-proxy [4847eba79f983a1125c5505ccd7e899cf74aaca8a7ee689d6facb7f8e647d508] <==
	I0328 04:18:45.481553       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0328 04:18:45.481751       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0328 04:18:45.526293       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 04:18:45.526384       1 server_others.go:185] Using iptables Proxier.
	I0328 04:18:45.526600       1 server.go:650] Version: v1.20.0
	I0328 04:18:45.527053       1 config.go:315] Starting service config controller
	I0328 04:18:45.527067       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 04:18:45.527149       1 config.go:224] Starting endpoint slice config controller
	I0328 04:18:45.527155       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 04:18:45.627199       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0328 04:18:45.627280       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [99a3fea888bd3bb88e88d7358c76dfdc3a5092529c06f4ac01044c88ed7a000d] <==
	I0328 04:21:12.770720       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0328 04:21:12.770797       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0328 04:21:12.787770       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 04:21:12.787874       1 server_others.go:185] Using iptables Proxier.
	I0328 04:21:12.788303       1 server.go:650] Version: v1.20.0
	I0328 04:21:12.789161       1 config.go:315] Starting service config controller
	I0328 04:21:12.789329       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 04:21:12.789912       1 config.go:224] Starting endpoint slice config controller
	I0328 04:21:12.790835       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 04:21:12.891509       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0328 04:21:12.891761       1 shared_informer.go:247] Caches are synced for service config 
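
The Unknown proxy mode "" warning in both kube-proxy runs is benign: an empty mode field in the kube-proxy config falls back to the iptables proxier, which the following line confirms. The effective config can be inspected with (kubeadm stores it under the config.conf key):

	kubectl --context old-k8s-version-140381 -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}'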
	
	
	==> kube-scheduler [105d61347ffff980321dc496ef09f17ca4c7bc45864241249373014396abaed6] <==
	I0328 04:18:20.800716       1 serving.go:331] Generated self-signed cert in-memory
	W0328 04:18:25.606297       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 04:18:25.606569       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 04:18:25.606699       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 04:18:25.606799       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 04:18:25.676053       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0328 04:18:25.677154       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 04:18:25.681346       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 04:18:25.681501       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0328 04:18:25.731255       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 04:18:25.735477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 04:18:25.735773       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 04:18:25.736005       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 04:18:25.736206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 04:18:25.736594       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 04:18:25.736761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 04:18:25.736926       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 04:18:25.736985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 04:18:25.737046       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 04:18:25.737093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 04:18:25.737146       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 04:18:26.568550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 04:18:26.568635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 04:18:26.758997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 04:18:27.081635       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
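
The forbidden/reflector errors in this block are the scheduler racing kubeadm's RBAC bootstrap during the 04:18:25-26 window; once the ClusterRoleBindings exist the informers list successfully, and the closing "Caches are synced" line confirms recovery. The grant can be spot-checked after the fact with:

	kubectl --context old-k8s-version-140381 auth can-i list pods --as=system:kube-scheduler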
	
	
	==> kube-scheduler [42fb577a72d6ff3b85f126114a28e8e647a6ae3efcf1f81a4617f8fb9d502186] <==
	I0328 04:21:02.317220       1 serving.go:331] Generated self-signed cert in-memory
	W0328 04:21:09.937422       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 04:21:09.937454       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 04:21:09.937466       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 04:21:09.937471       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 04:21:10.254432       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0328 04:21:10.254519       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 04:21:10.254526       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 04:21:10.254541       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0328 04:21:10.354782       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 28 04:25:09 old-k8s-version-140381 kubelet[666]: E0328 04:25:09.202098     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:25:20 old-k8s-version-140381 kubelet[666]: E0328 04:25:20.202174     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: I0328 04:25:24.201490     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:25:24 old-k8s-version-140381 kubelet[666]: E0328 04:25:24.201883     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:25:34 old-k8s-version-140381 kubelet[666]: E0328 04:25:34.207786     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: I0328 04:25:37.201422     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:25:37 old-k8s-version-140381 kubelet[666]: E0328 04:25:37.201754     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:25:45 old-k8s-version-140381 kubelet[666]: E0328 04:25:45.204388     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: I0328 04:25:50.204671     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:25:50 old-k8s-version-140381 kubelet[666]: E0328 04:25:50.205165     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:25:58 old-k8s-version-140381 kubelet[666]: E0328 04:25:58.202750     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: I0328 04:26:01.201377     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:26:01 old-k8s-version-140381 kubelet[666]: E0328 04:26:01.201757     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:26:12 old-k8s-version-140381 kubelet[666]: E0328 04:26:12.202992     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: I0328 04:26:14.202153     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:26:14 old-k8s-version-140381 kubelet[666]: E0328 04:26:14.202954     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:26:23 old-k8s-version-140381 kubelet[666]: E0328 04:26:23.202145     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: I0328 04:26:26.201371     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:26:26 old-k8s-version-140381 kubelet[666]: E0328 04:26:26.202120     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:26:38 old-k8s-version-140381 kubelet[666]: E0328 04:26:38.206755     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: I0328 04:26:40.201490     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:26:40 old-k8s-version-140381 kubelet[666]: E0328 04:26:40.201832     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
	Mar 28 04:26:49 old-k8s-version-140381 kubelet[666]: E0328 04:26:49.202084     666 pod_workers.go:191] Error syncing pod daea2cfd-90d3-4662-a088-53293cb710c7 ("metrics-server-9975d5f86-cccxt_kube-system(daea2cfd-90d3-4662-a088-53293cb710c7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 04:26:55 old-k8s-version-140381 kubelet[666]: I0328 04:26:55.201373     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 20acd12f7e916472b1e8da4e15dccc6309b7bf972b36c544acdbc4f46a36e1d1
	Mar 28 04:26:55 old-k8s-version-140381 kubelet[666]: E0328 04:26:55.201721     666 pod_workers.go:191] Error syncing pod c7eedef0-88f3-488d-b88e-610a43637a0f ("dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-rgkgb_kubernetes-dashboard(c7eedef0-88f3-488d-b88e-610a43637a0f)"
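
Two failure loops repeat through this kubelet log: metrics-server is stuck in ImagePullBackOff because its image points at the fake.domain registry (which appears intentionally unresolvable in this test, so the pull can never succeed), and dashboard-metrics-scraper cycles through CrashLoopBackOff with a 2m40s back-off. Standard triage for the scraper, outside the harness, reusing the context and pod name from the log above:

	kubectl --context old-k8s-version-140381 -n kubernetes-dashboard \
	  logs --previous dashboard-metrics-scraper-8d5bb5db8-rgkgb
	kubectl --context old-k8s-version-140381 -n kubernetes-dashboard \
	  get events --field-selector involvedObject.name=dashboard-metrics-scraper-8d5bb5db8-rgkgb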
	
	
	==> kubernetes-dashboard [ef51c0050c256b8c7cdb69bbc3d97166944461edd547569734337f4aed4566b0] <==
	2024/03/28 04:21:32 Using namespace: kubernetes-dashboard
	2024/03/28 04:21:32 Using in-cluster config to connect to apiserver
	2024/03/28 04:21:32 Using secret token for csrf signing
	2024/03/28 04:21:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/28 04:21:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/28 04:21:32 Successful initial request to the apiserver, version: v1.20.0
	2024/03/28 04:21:32 Generating JWE encryption key
	2024/03/28 04:21:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/28 04:21:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/28 04:21:33 Initializing JWE encryption key from synchronized object
	2024/03/28 04:21:33 Creating in-cluster Sidecar client
	2024/03/28 04:21:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:21:33 Serving insecurely on HTTP port: 9090
	2024/03/28 04:22:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:22:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:23:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:23:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:24:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:24:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:25:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:25:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:26:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:26:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 04:21:32 Starting overwatch
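
Every health-check failure above is the same lookup: the dashboard cannot reach the dashboard-metrics-scraper service while that pod crash-loops (see the kubelet log). One way to confirm whether the service has ready endpoints, assuming direct access to the cluster:

	kubectl --context old-k8s-version-140381 -n kubernetes-dashboard \
	  get svc,endpoints dashboard-metrics-scraper

The trailing 04:21:32 "Starting overwatch" line is out of timestamp order; that is most likely stdout/stderr interleaving in the log dump rather than a restart.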
	
	
	==> storage-provisioner [3df02b7fe0a2dd2fd7cc44aa18b0071e373698eee6915526563ad070f5755f20] <==
	I0328 04:21:57.313128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 04:21:57.332862       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 04:21:57.332927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 04:22:14.799086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 04:22:14.799470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140381_7dd215c1-5bee-45e8-9a04-8ee87a237471!
	I0328 04:22:14.812513       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"309cad62-8d11-4bfe-aa17-4bd4c7c655eb", APIVersion:"v1", ResourceVersion:"856", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-140381_7dd215c1-5bee-45e8-9a04-8ee87a237471 became leader
	I0328 04:22:14.900423       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140381_7dd215c1-5bee-45e8-9a04-8ee87a237471!
	
	
	==> storage-provisioner [b73430246eaca06714d6e309922be97d41f5d26526e373b969f0fc05214f7d8a] <==
	I0328 04:21:12.132227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0328 04:21:42.133867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
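
This earlier storage-provisioner instance exited because it could not reach the API server's service VIP (10.96.0.1:443) within roughly 30s of starting; the replacement instance above came up at 04:21:57 and went on to acquire the leader lease. A quick in-cluster connectivity probe, as a sketch only (the curlimages/curl image and the apicheck pod name are illustrative, not part of the harness):

	kubectl --context old-k8s-version-140381 run apicheck --rm -i --restart=Never \
	  --image=curlimages/curl -- curl -ksS https://10.96.0.1:443/version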
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140381 -n old-k8s-version-140381
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-140381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-cccxt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-140381 describe pod metrics-server-9975d5f86-cccxt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-140381 describe pod metrics-server-9975d5f86-cccxt: exit status 1 (123.741915ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-cccxt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-140381 describe pod metrics-server-9975d5f86-cccxt: exit status 1
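
The NotFound here looks like a namespace mismatch rather than a vanished pod: the describe is issued without -n, so it searches default, while metrics-server-9975d5f86-cccxt runs in kube-system (see the kubelet log above). The namespace-qualified form would be:

	kubectl --context old-k8s-version-140381 -n kube-system \
	  describe pod metrics-server-9975d5f86-cccxt
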
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.13s)

                                                
                                    

Test pass (296/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.37
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.29.3/json-events 6.96
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.19
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-beta.0/json-events 7.75
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.12
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
36 TestAddons/Setup 137.98
38 TestAddons/parallel/Registry 15.51
40 TestAddons/parallel/InspektorGadget 11.81
41 TestAddons/parallel/MetricsServer 5.85
44 TestAddons/parallel/CSI 71.17
45 TestAddons/parallel/Headlamp 10.28
47 TestAddons/parallel/LocalPath 10.69
48 TestAddons/parallel/NvidiaDevicePlugin 6.63
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.28
54 TestCertOptions 36.69
55 TestCertExpiration 230.3
57 TestForceSystemdFlag 39.88
58 TestForceSystemdEnv 47.87
59 TestDockerEnvContainerd 49.16
64 TestErrorSpam/setup 29.67
65 TestErrorSpam/start 0.72
66 TestErrorSpam/status 0.98
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.8
69 TestErrorSpam/stop 1.43
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 62.6
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.91
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.99
81 TestFunctional/serial/CacheCmd/cache/add_local 1.55
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 43.67
90 TestFunctional/serial/ComponentHealth 0.11
91 TestFunctional/serial/LogsCmd 1.71
92 TestFunctional/serial/LogsFileCmd 1.75
93 TestFunctional/serial/InvalidService 4.6
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 9.78
97 TestFunctional/parallel/DryRun 0.51
98 TestFunctional/parallel/InternationalLanguage 0.22
99 TestFunctional/parallel/StatusCmd 1.35
103 TestFunctional/parallel/ServiceCmdConnect 10.7
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 27.22
107 TestFunctional/parallel/SSHCmd 0.67
108 TestFunctional/parallel/CpCmd 2.28
110 TestFunctional/parallel/FileSync 0.28
111 TestFunctional/parallel/CertSync 2.27
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
119 TestFunctional/parallel/License 0.28
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
132 TestFunctional/parallel/ServiceCmd/List 0.52
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
136 TestFunctional/parallel/ProfileCmd/profile_list 0.53
137 TestFunctional/parallel/ServiceCmd/Format 0.5
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
139 TestFunctional/parallel/ServiceCmd/URL 0.57
140 TestFunctional/parallel/MountCmd/any-port 6.83
141 TestFunctional/parallel/MountCmd/specific-port 2.17
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.31
143 TestFunctional/parallel/Version/short 0.1
144 TestFunctional/parallel/Version/components 1.25
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
150 TestFunctional/parallel/ImageCommands/Setup 1.92
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
161 TestFunctional/delete_addon-resizer_images 0.1
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMultiControlPlane/serial/StartCluster 132.58
168 TestMultiControlPlane/serial/DeployApp 17.9
169 TestMultiControlPlane/serial/PingHostFromPods 1.75
170 TestMultiControlPlane/serial/AddWorkerNode 24
171 TestMultiControlPlane/serial/NodeLabels 0.12
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
173 TestMultiControlPlane/serial/CopyFile 19.7
174 TestMultiControlPlane/serial/StopSecondaryNode 13.15
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
176 TestMultiControlPlane/serial/RestartSecondaryNode 17.93
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 133.58
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.52
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMultiControlPlane/serial/StopCluster 35.96
182 TestMultiControlPlane/serial/RestartCluster 78.17
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
184 TestMultiControlPlane/serial/AddSecondaryNode 44.2
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 85.81
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.72
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.64
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.75
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 40.26
215 TestKicCustomNetwork/use_default_bridge_network 35.95
216 TestKicExistingNetwork 38
217 TestKicCustomSubnet 35.21
218 TestKicStaticIP 35.74
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 67.2
223 TestMountStart/serial/StartWithMountFirst 6.04
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 7.02
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.59
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 7.54
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 97.67
235 TestMultiNode/serial/DeployApp2Nodes 46.02
236 TestMultiNode/serial/PingHostFrom2Pods 1.07
237 TestMultiNode/serial/AddNode 16.95
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 10.32
241 TestMultiNode/serial/StopNode 2.3
242 TestMultiNode/serial/StartAfterStop 9.85
243 TestMultiNode/serial/RestartKeepsNodes 84.75
244 TestMultiNode/serial/DeleteNode 5.4
245 TestMultiNode/serial/StopMultiNode 23.99
246 TestMultiNode/serial/RestartMultiNode 55.72
247 TestMultiNode/serial/ValidateNameConflict 37.07
252 TestPreload 107.78
254 TestScheduledStopUnix 106.92
257 TestInsufficientStorage 10.02
258 TestRunningBinaryUpgrade 87.96
260 TestKubernetesUpgrade 378.04
261 TestMissingContainerUpgrade 158.23
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 43.48
265 TestNoKubernetes/serial/StartWithStopK8s 16.26
266 TestNoKubernetes/serial/Start 6.32
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
268 TestNoKubernetes/serial/ProfileList 1.06
269 TestNoKubernetes/serial/Stop 1.32
270 TestNoKubernetes/serial/StartNoArgs 8.47
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
272 TestStoppedBinaryUpgrade/Setup 1.33
273 TestStoppedBinaryUpgrade/Upgrade 108.89
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
283 TestPause/serial/Start 88.34
284 TestPause/serial/SecondStartNoReconfiguration 8.01
285 TestPause/serial/Pause 1.24
286 TestPause/serial/VerifyStatus 0.41
287 TestPause/serial/Unpause 1.09
288 TestPause/serial/PauseAgain 1.24
289 TestPause/serial/DeletePaused 4.66
290 TestPause/serial/VerifyDeletedResources 0.68
298 TestNetworkPlugins/group/false 5.16
303 TestStartStop/group/old-k8s-version/serial/FirstStart 152.03
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.55
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.67
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.01
308 TestStartStop/group/old-k8s-version/serial/Stop 12.38
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 277.09
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
321 TestStartStop/group/old-k8s-version/serial/Pause 2.96
322 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
323 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.07
325 TestStartStop/group/embed-certs/serial/FirstStart 92.13
327 TestStartStop/group/no-preload/serial/FirstStart 74.11
328 TestStartStop/group/no-preload/serial/DeployApp 7.38
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
330 TestStartStop/group/no-preload/serial/Stop 12.12
331 TestStartStop/group/embed-certs/serial/DeployApp 8.45
332 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/no-preload/serial/SecondStart 265.74
334 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
335 TestStartStop/group/embed-certs/serial/Stop 12.1
336 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
337 TestStartStop/group/embed-certs/serial/SecondStart 267.53
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
341 TestStartStop/group/no-preload/serial/Pause 3.63
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/newest-cni/serial/FirstStart 54.17
345 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
346 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
347 TestStartStop/group/embed-certs/serial/Pause 4
348 TestNetworkPlugins/group/auto/Start 88
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
351 TestStartStop/group/newest-cni/serial/Stop 1.27
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
353 TestStartStop/group/newest-cni/serial/SecondStart 17.29
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
357 TestStartStop/group/newest-cni/serial/Pause 3.5
358 TestNetworkPlugins/group/kindnet/Start 91.71
359 TestNetworkPlugins/group/auto/KubeletFlags 0.37
360 TestNetworkPlugins/group/auto/NetCatPod 10.43
361 TestNetworkPlugins/group/auto/DNS 0.21
362 TestNetworkPlugins/group/auto/Localhost 0.15
363 TestNetworkPlugins/group/auto/HairPin 0.15
364 TestNetworkPlugins/group/calico/Start 77.06
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
367 TestNetworkPlugins/group/kindnet/NetCatPod 8.27
368 TestNetworkPlugins/group/kindnet/DNS 0.3
369 TestNetworkPlugins/group/kindnet/Localhost 0.23
370 TestNetworkPlugins/group/kindnet/HairPin 0.21
371 TestNetworkPlugins/group/custom-flannel/Start 67.07
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.33
374 TestNetworkPlugins/group/calico/NetCatPod 10.31
375 TestNetworkPlugins/group/calico/DNS 0.31
376 TestNetworkPlugins/group/calico/Localhost 0.21
377 TestNetworkPlugins/group/calico/HairPin 0.24
378 TestNetworkPlugins/group/enable-default-cni/Start 89.66
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
381 TestNetworkPlugins/group/custom-flannel/DNS 0.35
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
384 TestNetworkPlugins/group/flannel/Start 60.89
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
392 TestNetworkPlugins/group/flannel/NetCatPod 9.41
393 TestNetworkPlugins/group/bridge/Start 93.23
394 TestNetworkPlugins/group/flannel/DNS 0.2
395 TestNetworkPlugins/group/flannel/Localhost 0.16
396 TestNetworkPlugins/group/flannel/HairPin 0.15
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
398 TestNetworkPlugins/group/bridge/NetCatPod 10.26
399 TestNetworkPlugins/group/bridge/DNS 0.17
400 TestNetworkPlugins/group/bridge/Localhost 0.14
401 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (10.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-417144 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-417144 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.216847339s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-417144
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-417144: exit status 85 (374.361431ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-417144 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:32 UTC |          |
	|         | -p download-only-417144        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 03:32:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 03:32:58.094774 3255403 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:32:58.094913 3255403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:32:58.094924 3255403 out.go:304] Setting ErrFile to fd 2...
	I0328 03:32:58.094929 3255403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:32:58.095185 3255403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	W0328 03:32:58.095315 3255403 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18485-3249988/.minikube/config/config.json: open /home/jenkins/minikube-integration/18485-3249988/.minikube/config/config.json: no such file or directory
	I0328 03:32:58.095716 3255403 out.go:298] Setting JSON to true
	I0328 03:32:58.096638 3255403 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":40516,"bootTime":1711556262,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:32:58.096710 3255403 start.go:139] virtualization:  
	I0328 03:32:58.100067 3255403 out.go:97] [download-only-417144] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:32:58.102109 3255403 out.go:169] MINIKUBE_LOCATION=18485
	W0328 03:32:58.100274 3255403 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball: no such file or directory
	I0328 03:32:58.100340 3255403 notify.go:220] Checking for updates...
	I0328 03:32:58.107016 3255403 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:32:58.109101 3255403 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:32:58.111275 3255403 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:32:58.113264 3255403 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 03:32:58.117241 3255403 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 03:32:58.117580 3255403 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:32:58.136211 3255403 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:32:58.136350 3255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:32:58.197545 3255403 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 03:32:58.188366468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:32:58.197660 3255403 docker.go:295] overlay module found
	I0328 03:32:58.199498 3255403 out.go:97] Using the docker driver based on user configuration
	I0328 03:32:58.199541 3255403 start.go:297] selected driver: docker
	I0328 03:32:58.199548 3255403 start.go:901] validating driver "docker" against <nil>
	I0328 03:32:58.199673 3255403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:32:58.255228 3255403 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 03:32:58.241531847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:32:58.255409 3255403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 03:32:58.255674 3255403 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 03:32:58.255838 3255403 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 03:32:58.258090 3255403 out.go:169] Using Docker driver with root privileges
	I0328 03:32:58.259748 3255403 cni.go:84] Creating CNI manager for ""
	I0328 03:32:58.259775 3255403 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:32:58.259786 3255403 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 03:32:58.259867 3255403 start.go:340] cluster config:
	{Name:download-only-417144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-417144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:32:58.261786 3255403 out.go:97] Starting "download-only-417144" primary control-plane node in "download-only-417144" cluster
	I0328 03:32:58.261808 3255403 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 03:32:58.263551 3255403 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 03:32:58.263581 3255403 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 03:32:58.263739 3255403 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 03:32:58.276667 3255403 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:32:58.277499 3255403 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 03:32:58.277608 3255403 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:32:58.339057 3255403 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0328 03:32:58.339083 3255403 cache.go:56] Caching tarball of preloaded images
	I0328 03:32:58.339782 3255403 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 03:32:58.342109 3255403 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0328 03:32:58.342128 3255403 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0328 03:32:58.457551 3255403 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0328 03:33:03.272767 3255403 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 03:33:04.001440 3255403 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0328 03:33:04.001565 3255403 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0328 03:33:05.091554 3255403 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0328 03:33:05.091973 3255403 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/download-only-417144/config.json ...
	I0328 03:33:05.092014 3255403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/download-only-417144/config.json: {Name:mk2c3ff05b4a074c5fc971137a0273be9f5907d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 03:33:05.092232 3255403 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0328 03:33:05.092991 3255403 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-417144 host does not exist
	  To start a cluster, run: "minikube start -p download-only-417144"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.37s)
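
The exit status 85 is tolerated here: a download-only profile never creates a control-plane host, which is exactly what the trailing "host does not exist" message says, and this test only measures how long the logs command takes. What the start did leave behind can be inspected in the cache (paths taken from the log above, assuming the same MINIKUBE_HOME):

	ls /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/
	ls /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/linux/arm64/v1.20.0/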

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-417144
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (6.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-613150 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-613150 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.955913523s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (6.96s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-613150
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-613150: exit status 85 (78.019989ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-417144 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:32 UTC |                     |
	|         | -p download-only-417144        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-417144        | download-only-417144 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | -o=json --download-only        | download-only-613150 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-613150        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 03:33:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 03:33:09.236383 3255572 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:33:09.236519 3255572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:09.236529 3255572 out.go:304] Setting ErrFile to fd 2...
	I0328 03:33:09.236535 3255572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:09.236785 3255572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:33:09.237183 3255572 out.go:298] Setting JSON to true
	I0328 03:33:09.238038 3255572 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":40527,"bootTime":1711556262,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:33:09.238108 3255572 start.go:139] virtualization:  
	I0328 03:33:09.242531 3255572 out.go:97] [download-only-613150] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:33:09.244932 3255572 out.go:169] MINIKUBE_LOCATION=18485
	I0328 03:33:09.242843 3255572 notify.go:220] Checking for updates...
	I0328 03:33:09.249116 3255572 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:33:09.251118 3255572 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:33:09.253390 3255572 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:33:09.255359 3255572 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 03:33:09.259598 3255572 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 03:33:09.259892 3255572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:33:09.282330 3255572 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:33:09.282432 3255572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:09.350700 3255572 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:09.339232029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:09.350818 3255572 docker.go:295] overlay module found
	I0328 03:33:09.353041 3255572 out.go:97] Using the docker driver based on user configuration
	I0328 03:33:09.353071 3255572 start.go:297] selected driver: docker
	I0328 03:33:09.353078 3255572 start.go:901] validating driver "docker" against <nil>
	I0328 03:33:09.353199 3255572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:09.409450 3255572 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:09.400733709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:09.409621 3255572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 03:33:09.409924 3255572 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 03:33:09.410079 3255572 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 03:33:09.412108 3255572 out.go:169] Using Docker driver with root privileges
	I0328 03:33:09.413727 3255572 cni.go:84] Creating CNI manager for ""
	I0328 03:33:09.413751 3255572 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:09.413761 3255572 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 03:33:09.413846 3255572 start.go:340] cluster config:
	{Name:download-only-613150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-613150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:09.415822 3255572 out.go:97] Starting "download-only-613150" primary control-plane node in "download-only-613150" cluster
	I0328 03:33:09.415850 3255572 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 03:33:09.417936 3255572 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 03:33:09.417968 3255572 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:09.418139 3255572 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 03:33:09.431040 3255572 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:33:09.431176 3255572 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 03:33:09.431200 3255572 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 03:33:09.431208 3255572 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 03:33:09.431216 3255572 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 03:33:09.498875 3255572 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0328 03:33:09.498906 3255572 cache.go:56] Caching tarball of preloaded images
	I0328 03:33:09.499069 3255572 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0328 03:33:09.501074 3255572 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0328 03:33:09.501092 3255572 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0328 03:33:09.622217 3255572 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179 -> /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-613150 host does not exist
	  To start a cluster, run: "minikube start -p download-only-613150"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
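
The preload above is fetched with its md5 digest appended as a ?checksum= query parameter and verified after download. A minimal Go sketch of the same verify-while-downloading idea; the URL and digest are copied from the download.go:107 line above, while the plain http.Get, output file name, and error handling are illustrative rather than minikube's internal download package:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url to dest while hashing every byte, then
	// compares the digest against want (lowercase hex).
	func downloadWithMD5(url, dest, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// TeeReader feeds the hash as the body is copied to disk, so the
		// multi-hundred-megabyte tarball is only read once.
		if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// URL and digest copied from the log line above; file name is illustrative.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4"
		if err := downloadWithMD5(url, "preload.tar.lz4", "663a9a795decbfebeb48b89f3f24d179"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}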

TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.19s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-613150
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-beta.0/json-events (7.75s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-831467 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-831467 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.749240761s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (7.75s)
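
The json-events test drives minikube start with -o=json, which switches the output to one JSON event object per line. A minimal consumer sketch, assuming only that each stdout line is a standalone JSON object; the "type" field name is an assumption for illustration, not something taken from this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe minikube's output in, e.g.:
		//   minikube start -o=json --download-only ... | thisprog
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
				continue
			}
			// "type" is an assumed field name; inspect real events for the schema.
			fmt.Printf("event type=%v\n", ev["type"])
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}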

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-831467
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-831467: exit status 85 (84.831245ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-417144 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:32 UTC |                     |
	|         | -p download-only-417144             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-417144             | download-only-417144 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | -o=json --download-only             | download-only-613150 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-613150             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| delete  | -p download-only-613150             | download-only-613150 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC | 28 Mar 24 03:33 UTC |
	| start   | -o=json --download-only             | download-only-831467 | jenkins | v1.33.0-beta.0 | 28 Mar 24 03:33 UTC |                     |
	|         | -p download-only-831467             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 03:33:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 03:33:16.593048 3255733 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:33:16.593263 3255733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:16.593291 3255733 out.go:304] Setting ErrFile to fd 2...
	I0328 03:33:16.593310 3255733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:33:16.593708 3255733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:33:16.594239 3255733 out.go:298] Setting JSON to true
	I0328 03:33:16.595187 3255733 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":40535,"bootTime":1711556262,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:33:16.595572 3255733 start.go:139] virtualization:  
	I0328 03:33:16.598969 3255733 out.go:97] [download-only-831467] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:33:16.601034 3255733 out.go:169] MINIKUBE_LOCATION=18485
	I0328 03:33:16.599191 3255733 notify.go:220] Checking for updates...
	I0328 03:33:16.603232 3255733 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:33:16.605322 3255733 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:33:16.607044 3255733 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:33:16.608873 3255733 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 03:33:16.613051 3255733 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 03:33:16.613316 3255733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:33:16.631735 3255733 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:33:16.631842 3255733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:16.691246 3255733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:16.682515523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:16.691356 3255733 docker.go:295] overlay module found
	I0328 03:33:16.693513 3255733 out.go:97] Using the docker driver based on user configuration
	I0328 03:33:16.693556 3255733 start.go:297] selected driver: docker
	I0328 03:33:16.693564 3255733 start.go:901] validating driver "docker" against <nil>
	I0328 03:33:16.693672 3255733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:33:16.755829 3255733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 03:33:16.747456237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:33:16.756003 3255733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 03:33:16.756267 3255733 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 03:33:16.756480 3255733 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 03:33:16.758863 3255733 out.go:169] Using Docker driver with root privileges
	I0328 03:33:16.760975 3255733 cni.go:84] Creating CNI manager for ""
	I0328 03:33:16.760997 3255733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0328 03:33:16.761008 3255733 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 03:33:16.761088 3255733 start.go:340] cluster config:
	{Name:download-only-831467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-831467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:33:16.763116 3255733 out.go:97] Starting "download-only-831467" primary control-plane node in "download-only-831467" cluster
	I0328 03:33:16.763135 3255733 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0328 03:33:16.764957 3255733 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 03:33:16.764981 3255733 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0328 03:33:16.765164 3255733 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 03:33:16.778748 3255733 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 03:33:16.778880 3255733 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 03:33:16.778910 3255733 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 03:33:16.778920 3255733 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 03:33:16.778928 3255733 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 03:33:16.836802 3255733 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0328 03:33:16.836835 3255733 cache.go:56] Caching tarball of preloaded images
	I0328 03:33:16.837452 3255733 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0328 03:33:16.840025 3255733 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0328 03:33:16.840053 3255733 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0328 03:33:16.958270 3255733 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:f676343275e1172ac594af64d6d0592a -> /home/jenkins/minikube-integration/18485-3249988/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-831467 host does not exist
	  To start a cluster, run: "minikube start -p download-only-831467"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-831467
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-112527 --alsologtostderr --binary-mirror http://127.0.0.1:33653 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-112527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-112527
--- PASS: TestBinaryMirror (0.65s)
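
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:33653 so binaries are fetched from a local endpoint instead of the default hosts. A minimal sketch of such a mirror, assuming a local ./mirror directory whose layout matches the upstream URL paths; the directory name and that layout assumption are illustrative:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-fetched binaries from ./mirror; minikube is then started
		// with --binary-mirror http://127.0.0.1:33653. The directory layout
		// must match the URL paths minikube would request upstream.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:33653", nil))
	}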

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-340351
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-340351: exit status 85 (123.671739ms)

-- stdout --
	* Profile "addons-340351" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340351"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-340351
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-340351: exit status 85 (154.978338ms)

-- stdout --
	* Profile "addons-340351" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340351"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)
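
Both PreSetup tests deliberately run an addon command against a profile that does not exist and assert the non-zero exit (status 85 here) rather than success. A minimal sketch of checking a specific exit code from Go with os/exec; the binary path and arguments are copied from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"addons", "enable", "dashboard", "-p", "addons-340351")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The tests above expect exit status 85 for a missing profile.
			fmt.Printf("exit code %d\n%s", ee.ExitCode(), out)
			return
		}
		fmt.Printf("unexpected result: err=%v\n%s", err, out)
	}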

TestAddons/Setup (137.98s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-340351 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-340351 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m17.974099454s)
--- PASS: TestAddons/Setup (137.98s)

TestAddons/parallel/Registry (15.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 49.079257ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l2d8j" [efbdf6d1-f769-43b5-92a9-b4b43129bbc9] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005489981s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qdjhx" [48e81d37-f08c-4677-a66e-2dc91903192d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00675682s
addons_test.go:340: (dbg) Run:  kubectl --context addons-340351 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-340351 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-340351 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.356283928s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 ip
2024/03/28 03:35:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.51s)
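
The in-container check above uses busybox wget --spider -S, a headers-only request that succeeds if the registry answers. An equivalent probe in Go, assuming it runs somewhere the in-cluster service DNS name resolves (inside a pod, or behind a port-forward):

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Headers-only probe, like `wget --spider -S`.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
			os.Exit(1)
		}
	}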

TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h684r" [a9e914c2-5537-4515-9628-6d9257c3d40d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00519376s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-340351
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-340351: (5.806350092s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.659871ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-87zwk" [912dbcd4-98b5-4145-a0ad-4cfa8d5f457c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005836865s
addons_test.go:415: (dbg) Run:  kubectl --context addons-340351 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)
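
Most of these parallel addon tests follow one pattern: wait until a pod matching a label selector is Running and Ready, as with k8s-app=metrics-server above. A client-go sketch of that readiness wait; the namespace and selector are copied from the log, while the kubeconfig path is illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(p *v1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-list every 2s for up to 6m, like the harness's wait above.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == v1.PodRunning && podReady(&p) {
					fmt.Println("ready:", p.Name)
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
	}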

TestAddons/parallel/CSI (71.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 49.404162ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-340351 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-340351 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d180d49b-3fd8-49b0-a963-563372aaac23] Pending
helpers_test.go:344: "task-pv-pod" [d180d49b-3fd8-49b0-a963-563372aaac23] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d180d49b-3fd8-49b0-a963-563372aaac23] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004176193s
addons_test.go:584: (dbg) Run:  kubectl --context addons-340351 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340351 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340351 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-340351 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-340351 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-340351 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-340351 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [94acbf7e-4920-4d58-a311-5988160a10f0] Pending
helpers_test.go:344: "task-pv-pod-restore" [94acbf7e-4920-4d58-a311-5988160a10f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [94acbf7e-4920-4d58-a311-5988160a10f0] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004135073s
addons_test.go:626: (dbg) Run:  kubectl --context addons-340351 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-340351 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-340351 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-340351 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.875504071s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (71.17s)
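
The long run of helpers_test.go:394 lines above is the same poll-until pattern applied to a PersistentVolumeClaim: the harness re-reads .status.phase of hpvc until it leaves Pending. A client-go sketch of that wait as a single loop instead of repeated kubectl calls, using the same illustrative kubeconfig setup as the metrics-server sketch above; wait.PollImmediate is the long-standing (since deprecated) polling helper:

	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Re-check the claim every 2s for up to 6m, like the jsonpath loop above.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(
				context.TODO(), "hpvc", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			fmt.Println("phase:", pvc.Status.Phase)
			// Bound is the phase the provisioner reaches once the volume exists.
			return pvc.Status.Phase == v1.ClaimBound, nil
		})
		if err != nil {
			panic(err)
		}
	}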

TestAddons/parallel/Headlamp (10.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-340351 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-340351 --alsologtostderr -v=1: (1.279204672s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-fvckf" [edc71b07-cfbc-4989-8bac-1c5d96ed4f58] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-fvckf" [edc71b07-cfbc-4989-8bac-1c5d96ed4f58] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-fvckf" [edc71b07-cfbc-4989-8bac-1c5d96ed4f58] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004000112s
--- PASS: TestAddons/parallel/Headlamp (10.28s)

TestAddons/parallel/LocalPath (10.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-340351 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-340351 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [35e5557e-29b4-494a-b290-633e883d508c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [35e5557e-29b4-494a-b290-633e883d508c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [35e5557e-29b4-494a-b290-633e883d508c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00459298s
addons_test.go:891: (dbg) Run:  kubectl --context addons-340351 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 ssh "cat /opt/local-path-provisioner/pvc-909410b4-b229-4cbd-b1e7-84a050013630_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-340351 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-340351 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-340351 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.69s)

TestAddons/parallel/NvidiaDevicePlugin (6.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-24zx7" [87d15db8-a090-4212-9d30-443f2319b151] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007336556s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-340351
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-mhzbx" [f0e05df0-e7f1-4bf4-88f6-0973d642e67f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004032722s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-340351 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-340351 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-340351
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-340351: (11.983696196s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-340351
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-340351
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-340351
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (36.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-503492 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-503492 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.035620129s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-503492 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-503492 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-503492 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-503492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-503492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-503492: (1.941491135s)
--- PASS: TestCertOptions (36.69s)
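
The flow above reduces to two commands; a sketch against an illustrative profile "demo", using exactly the flags the test exercises:

	# start a cluster whose apiserver certificate carries extra SANs and a custom port
	minikube start -p demo --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
	# confirm the requested names, IPs, and port ended up in the generated certificate
	minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"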

TestCertExpiration (230.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834080 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834080 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.877223178s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834080 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834080 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.995536724s)
helpers_test.go:175: Cleaning up "cert-expiration-834080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-834080
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-834080: (2.423347408s)
--- PASS: TestCertExpiration (230.30s)
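
The expiration flow above is just two starts of the same profile; a sketch with an illustrative profile name:

	# issue cluster certificates that expire after three minutes
	minikube start -p demo --cert-expiration=3m
	# once they lapse, restarting with a longer window regenerates them in place
	minikube start -p demo --cert-expiration=8760h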

TestForceSystemdFlag (39.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-034881 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-034881 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.348594363s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-034881 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-034881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-034881
E0328 04:16:37.937362 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-034881: (2.164590547s)
--- PASS: TestForceSystemdFlag (39.88s)
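
What the test reads back after --force-systemd is the cgroup driver chosen in the containerd config; a sketch against an illustrative profile "demo" (the exact grep target is an assumption about what the test asserts):

	minikube start -p demo --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
	# with --force-systemd the config is expected to contain: SystemdCgroup = true
	minikube -p demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup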

TestForceSystemdEnv (47.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-003239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-003239 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.067903346s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-003239 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-003239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-003239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-003239: (2.335697526s)
--- PASS: TestForceSystemdEnv (47.87s)

TestDockerEnvContainerd (49.16s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-511678 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-511678 --driver=docker  --container-runtime=containerd: (33.208245208s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-511678"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-511678": (1.142629653s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NWZLbRihohWW/agent.3273753" SSH_AGENT_PID="3273754" DOCKER_HOST=ssh://docker@127.0.0.1:36234 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NWZLbRihohWW/agent.3273753" SSH_AGENT_PID="3273754" DOCKER_HOST=ssh://docker@127.0.0.1:36234 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NWZLbRihohWW/agent.3273753" SSH_AGENT_PID="3273754" DOCKER_HOST=ssh://docker@127.0.0.1:36234 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.448538604s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NWZLbRihohWW/agent.3273753" SSH_AGENT_PID="3273754" DOCKER_HOST=ssh://docker@127.0.0.1:36234 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-511678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-511678
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-511678: (1.9266455s)
--- PASS: TestDockerEnvContainerd (49.16s)
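
The docker-env round trip above, done interactively against an illustrative profile "demo"; --ssh-host and --ssh-add point the host's docker CLI at the engine inside the minikube node over SSH:

	eval "$(minikube -p demo docker-env --ssh-host --ssh-add)"
	docker version    # the Server section now reports the engine inside the node
	docker image ls   # images built in this shell land directly in the cluster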

TestErrorSpam/setup (29.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-049671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-049671 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-049671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-049671 --driver=docker  --container-runtime=containerd: (29.668237896s)
--- PASS: TestErrorSpam/setup (29.67s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 stop: (1.231637487s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049671 --log_dir /tmp/nospam-049671 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18485-3249988/.minikube/files/etc/test/nested/copy/3255398/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-376731 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m2.594777953s)
--- PASS: TestFunctional/serial/StartWithProxy (62.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.91s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-376731 --alsologtostderr -v=8: (5.909507276s)
functional_test.go:659: soft start took 5.913030156s for "functional-376731" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.91s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-376731 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:3.1: (1.450800978s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:3.3: (1.322503193s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 cache add registry.k8s.io/pause:latest: (1.216986254s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-376731 /tmp/TestFunctionalserialCacheCmdcacheadd_local226531046/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache add minikube-local-cache-test:functional-376731
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache delete minikube-local-cache-test:functional-376731
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-376731
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (319.794228ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 cache reload: (1.08044991s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
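
The reload sequence above, condensed (profile flag omitted for brevity):

	minikube cache add registry.k8s.io/pause:latest                  # pull into the host cache and load into the node
	minikube ssh sudo crictl rmi registry.k8s.io/pause:latest        # drop it from the node's runtime only
	minikube cache reload                                            # push every cached image back into the node
	minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again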

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 kubectl -- --context functional-376731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)
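
The command under test passes everything after -- through to a kubectl matched to the cluster's Kubernetes version; with an illustrative profile "demo":

	minikube -p demo kubectl -- get pods -A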

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-376731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0328 03:40:44.463919 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.471213 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.481534 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.501794 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.542062 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.622414 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:44.782796 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:45.103502 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:45.745527 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:47.025803 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:49.586049 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:40:54.706872 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:41:04.947314 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 03:41:25.428034 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-376731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.674615611s)
functional_test.go:757: restart took 43.674724589s for "functional-376731" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.67s)
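
--extra-config threads component flags through to kubeadm in component.key=value form; the invocation above, generalized to an illustrative profile:

	# enable an extra admission plugin on the apiserver of an existing cluster
	minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all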

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-376731 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 logs: (1.714778156s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.75s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 logs --file /tmp/TestFunctionalserialLogsFileCmd3569139366/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 logs --file /tmp/TestFunctionalserialLogsFileCmd3569139366/001/logs.txt: (1.747484645s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)

TestFunctional/serial/InvalidService (4.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-376731 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-376731
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-376731: exit status 115 (388.014056ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30403 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-376731 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.60s)
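
minikube service refuses to hand out a URL for a service with no running backing pod; a sketch with a hypothetical manifest name:

	kubectl apply -f invalid-svc.yaml   # a NodePort service whose selector matches no running pod
	minikube service invalid-svc        # exits 115 (SVC_UNREACHABLE) instead of printing a dead URL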

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 config get cpus: exit status 14 (99.941413ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 config get cpus: exit status 14 (87.197472ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
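
The round trip above, condensed against an illustrative profile "demo": get on an unset key exits 14, set persists the value, unset clears it again.

	minikube -p demo config set cpus 2
	minikube -p demo config get cpus    # prints 2
	minikube -p demo config unset cpus
	minikube -p demo config get cpus    # exit status 14: key not found in config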

TestFunctional/parallel/DashboardCmd (9.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-376731 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-376731 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3287871: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.78s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-376731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (253.888759ms)
-- stdout --
	* [functional-376731] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0328 03:42:09.118389 3287528 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:42:09.118628 3287528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:09.118655 3287528 out.go:304] Setting ErrFile to fd 2...
	I0328 03:42:09.118685 3287528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:09.119006 3287528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:42:09.119514 3287528 out.go:298] Setting JSON to false
	I0328 03:42:09.123014 3287528 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":41067,"bootTime":1711556262,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:42:09.123149 3287528 start.go:139] virtualization:  
	I0328 03:42:09.126521 3287528 out.go:177] * [functional-376731] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 03:42:09.129416 3287528 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 03:42:09.129487 3287528 notify.go:220] Checking for updates...
	I0328 03:42:09.135637 3287528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:42:09.137562 3287528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:42:09.139418 3287528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:42:09.142407 3287528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 03:42:09.144465 3287528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 03:42:09.147518 3287528 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:42:09.148085 3287528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:42:09.189740 3287528 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:42:09.189863 3287528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:42:09.253494 3287528 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 03:42:09.243748747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:42:09.253610 3287528 docker.go:295] overlay module found
	I0328 03:42:09.256061 3287528 out.go:177] * Using the docker driver based on existing profile
	I0328 03:42:09.258707 3287528 start.go:297] selected driver: docker
	I0328 03:42:09.258741 3287528 start.go:901] validating driver "docker" against &{Name:functional-376731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-376731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:42:09.258849 3287528 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 03:42:09.262156 3287528 out.go:177] 
	W0328 03:42:09.264666 3287528 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0328 03:42:09.266570 3287528 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.51s)
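
--dry-run validates a configuration against the existing profile without mutating it; here the undersized memory request is rejected up front (profile name "demo" illustrative):

	minikube start -p demo --dry-run --memory 250MB   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	minikube start -p demo --dry-run                  # exits cleanly when the configuration is viable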

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-376731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (214.95062ms)
-- stdout --
	* [functional-376731] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

** stderr ** 
	I0328 03:42:08.866196 3287488 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:42:08.866445 3287488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:08.866459 3287488 out.go:304] Setting ErrFile to fd 2...
	I0328 03:42:08.866465 3287488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:42:08.867807 3287488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:42:08.868211 3287488 out.go:298] Setting JSON to false
	I0328 03:42:08.869274 3287488 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":41067,"bootTime":1711556262,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 03:42:08.869349 3287488 start.go:139] virtualization:  
	I0328 03:42:08.872703 3287488 out.go:177] * [functional-376731] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0328 03:42:08.875380 3287488 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 03:42:08.875452 3287488 notify.go:220] Checking for updates...
	I0328 03:42:08.879681 3287488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 03:42:08.881492 3287488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 03:42:08.883417 3287488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 03:42:08.885214 3287488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 03:42:08.887049 3287488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 03:42:08.889478 3287488 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:42:08.889996 3287488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 03:42:08.914215 3287488 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 03:42:08.914371 3287488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:42:08.994731 3287488 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 03:42:08.984283554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:42:08.994849 3287488 docker.go:295] overlay module found
	I0328 03:42:09.002386 3287488 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0328 03:42:09.004718 3287488 start.go:297] selected driver: docker
	I0328 03:42:09.004746 3287488 start.go:901] validating driver "docker" against &{Name:functional-376731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-376731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 03:42:09.004898 3287488 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 03:42:09.008003 3287488 out.go:177] 
	W0328 03:42:09.010092 3287488 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0328 03:42:09.011903 3287488 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
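
status accepts a Go template via -f and structured output via -o json; the fields exercised above are Host, Kubelet, APIServer, and Kubeconfig (profile name "demo" illustrative):

	minikube -p demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p demo status -o json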

TestFunctional/parallel/ServiceCmdConnect (10.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-376731 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-376731 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4r4l4" [56bdb3c3-5f5d-4aac-98a8-6ed05dccdcc7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4r4l4" [56bdb3c3-5f5d-4aac-98a8-6ed05dccdcc7] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003643752s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30296
functional_test.go:1671: http://192.168.49.2:30296: success! body:
Hostname: hello-node-connect-7799dfb7c6-4r4l4
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30296
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
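
The NodePort round trip above in plain commands; the names and image are those used by the test:

	kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	minikube service hello-node-connect --url             # prints http://<node-ip>:<nodeport>
	curl "$(minikube service hello-node-connect --url)"   # returns the echoserver reply shown above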

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8950388a-5405-4952-8f87-41b689beaef0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004171536s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-376731 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-376731 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-376731 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-376731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee054d6a-7cc5-4a68-9574-0d75f2520ce5] Pending
helpers_test.go:344: "sp-pod" [ee054d6a-7cc5-4a68-9574-0d75f2520ce5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee054d6a-7cc5-4a68-9574-0d75f2520ce5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003954538s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-376731 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-376731 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-376731 delete -f testdata/storage-provisioner/pod.yaml: (1.114237298s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-376731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dbfe141e-4065-44a4-aa57-1700a20e63bd] Pending
helpers_test.go:344: "sp-pod" [dbfe141e-4065-44a4-aa57-1700a20e63bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dbfe141e-4065-44a4-aa57-1700a20e63bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004318032s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-376731 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.22s)
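
(note: the delete/re-apply of sp-pod above is the heart of this test — data written through the claim must outlive any single pod. The same round trip as a standalone Go sketch; the context name and manifest paths are from the log, and the readiness polling is omitted.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kc runs kubectl against the profile's context, panicking on failure.
	func kc(args ...string) string {
		args = append([]string{"--context", "functional-376731"}, args...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		return string(out)
	}

	func main() {
		kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// The file must survive the pod: the claim, not the pod, owns the data.
		fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}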

                                                
                                    
TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh -n functional-376731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cp functional-376731:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd725284856/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh -n functional-376731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh -n functional-376731 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)
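
(note: a compact way to check the same cp round trip is to copy the file in, cat it back over ssh, and compare bytes. Standalone Go sketch with the binary, profile and paths from the log.)

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		mk := "out/minikube-linux-arm64"
		if out, err := exec.Command(mk, "-p", "functional-376731", "cp",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp: %v\n%s", err, out))
		}
		// Read the file back from inside the node.
		got, err := exec.Command(mk, "-p", "functional-376731", "ssh", "-n",
			"functional-376731", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		want, _ := os.ReadFile("testdata/cp-test.txt")
		fmt.Println("round-trip intact:", bytes.Equal(got, want))
	}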

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3255398/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /etc/test/nested/copy/3255398/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (2.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3255398.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /etc/ssl/certs/3255398.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3255398.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /usr/share/ca-certificates/3255398.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/32553982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /etc/ssl/certs/32553982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/32553982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /usr/share/ca-certificates/32553982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)
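
(note: the third path checked above, /etc/ssl/certs/51391683.0, is an OpenSSL c_rehash-style name — the certificate's subject hash plus ".0" — so the same PEM should be readable under all three names. A standalone Go sketch comparing them; profile and paths are from the log.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshCat reads a file inside the node via `minikube ssh`.
	func sshCat(path string) string {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-376731",
			"ssh", "sudo cat "+path).Output()
		if err != nil {
			panic(fmt.Sprintf("%s: %v", path, err))
		}
		return string(out)
	}

	func main() {
		a := sshCat("/etc/ssl/certs/3255398.pem")
		b := sshCat("/usr/share/ca-certificates/3255398.pem")
		c := sshCat("/etc/ssl/certs/51391683.0") // subject-hash link to the same cert
		fmt.Println("all three copies identical:", a == b && b == c)
	}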

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-376731 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
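
(note: the --template argument above is plain Go text/template syntax ranging over the node's label map. The same construct in isolation — the label values below are examples only.)

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Identical shape to the kubectl go-template: print each key, then a space.
		t := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .}}{{$k}} {{end}}\n"))
		labels := map[string]string{ // example labels on a minikube node
			"kubernetes.io/hostname": "functional-376731",
			"minikube.k8s.io/name":   "functional-376731",
		}
		_ = t.Execute(os.Stdout, labels) // template ranges emit map keys in sorted order
	}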

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo systemctl is-active docker"
2024/03/28 03:42:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh "sudo systemctl is-active docker": exit status 1 (308.411076ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh "sudo systemctl is-active crio": exit status 1 (277.830417ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
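
(note: the two non-zero exits above are the assertion, not a failure — with containerd as the active runtime, `systemctl is-active docker` and `systemctl is-active crio` print "inactive" and exit with status 3, which `minikube ssh` propagates. A standalone Go sketch running the same probes.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			// A non-nil err here is the expected outcome for an inactive unit.
			out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-376731",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
		}
	}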

                                                
                                    
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3285280: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-376731 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3d3560a4-868c-4fe7-a2ed-8aef96b964d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3d3560a4-868c-4fe7-a2ed-8aef96b964d4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.006372326s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-376731 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
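
(note: the jsonpath query above returns an empty string until `minikube tunnel` assigns the Service a load-balancer ingress IP. A standalone Go sketch that polls the same query; the retry count and interval are arbitrary.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-376731",
				"get", "svc", "nginx-svc", "-o",
				"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
			if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
				fmt.Println("ingress IP:", ip)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
	}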

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.182.106 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-376731 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-376731 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-376731 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-nhczm" [b7e129cf-5fc3-4587-a299-5763a6ba8af9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-nhczm" [b7e129cf-5fc3-4587-a299-5763a6ba8af9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005117352s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service list -o json
functional_test.go:1490: Took "559.452602ms" to run "out/minikube-linux-arm64 -p functional-376731 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30281
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
E0328 03:42:06.388482 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
functional_test.go:1311: Took "412.681475ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "116.026123ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "469.002856ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "119.826809ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30281
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdany-port3046043288/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711597327179281937" to /tmp/TestFunctionalparallelMountCmdany-port3046043288/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711597327179281937" to /tmp/TestFunctionalparallelMountCmdany-port3046043288/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711597327179281937" to /tmp/TestFunctionalparallelMountCmdany-port3046043288/001/test-1711597327179281937
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 28 03:42 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 28 03:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 28 03:42 test-1711597327179281937
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh cat /mount-9p/test-1711597327179281937
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-376731 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d68616da-6b53-4c6e-84c1-a291029237db] Pending
helpers_test.go:344: "busybox-mount" [d68616da-6b53-4c6e-84c1-a291029237db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d68616da-6b53-4c6e-84c1-a291029237db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d68616da-6b53-4c6e-84c1-a291029237db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006265125s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-376731 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdany-port3046043288/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.83s)
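
(note: the pattern above is: start `minikube mount` in the background, prove the 9p mount from inside the guest with findmnt, exercise it, then unmount and stop the daemon. A reduced standalone Go sketch of the start-and-verify part; the host path /tmp/demo is illustrative, and a fixed sleep stands in for the test's retry loop.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		mk := "out/minikube-linux-arm64"
		// Start the 9p mount in the background, as the test's daemon helper does.
		mount := exec.Command(mk, "mount", "-p", "functional-376731", "/tmp/demo:/mount-9p")
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // the test instead stops it and verifies cleanup
		time.Sleep(3 * time.Second) // crude settle; the test retries findmnt instead
		out, _ := exec.Command(mk, "-p", "functional-376731",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		fmt.Printf("%s", out)
	}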

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdspecific-port2269106730/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (521.987259ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdspecific-port2269106730/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh "sudo umount -f /mount-9p": exit status 1 (360.94947ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-376731 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdspecific-port2269106730/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T" /mount1: exit status 1 (791.454002ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-376731 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1607281493/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 version -o=json --components: (1.249887981s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376731 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-376731
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376731 image ls --format short --alsologtostderr:
I0328 03:42:36.216107 3290083 out.go:291] Setting OutFile to fd 1 ...
I0328 03:42:36.216699 3290083 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.216716 3290083 out.go:304] Setting ErrFile to fd 2...
I0328 03:42:36.216723 3290083 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.217021 3290083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
I0328 03:42:36.217695 3290083 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.217849 3290083 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.218340 3290083 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
I0328 03:42:36.244796 3290083 ssh_runner.go:195] Run: systemctl --version
I0328 03:42:36.244853 3290083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
I0328 03:42:36.268645 3290083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
I0328 03:42:36.376212 3290083 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376731 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:0e9b4a | 25MB   |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:4b51f9 | 16.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-376731  | sha256:0b2c86 | 991B   |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:258111 | 32.1MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:121d70 | 30.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b8c826 | 17.6MB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376731 image ls --format table --alsologtostderr:
I0328 03:42:36.550398 3290146 out.go:291] Setting OutFile to fd 1 ...
I0328 03:42:36.550535 3290146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.550546 3290146 out.go:304] Setting ErrFile to fd 2...
I0328 03:42:36.550552 3290146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.550792 3290146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
I0328 03:42:36.551415 3290146 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.551534 3290146 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.551994 3290146 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
I0328 03:42:36.593622 3290146 ssh_runner.go:195] Run: systemctl --version
I0328 03:42:36.593680 3290146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
I0328 03:42:36.611771 3290146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
I0328 03:42:36.712993 3290146 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376731 image ls --format json --alsologtostderr:
[{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"32143347"},{"id":"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["regi
stry.k8s.io/kube-proxy:v1.29.3"],"size":"25039677"},{"id":"sha256:0b2c86877eb9fa50d0752d8a13a8c11a870de5aa63c21797d054c13bd1e40cdd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-376731"],"size":"991"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"16931371"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["
registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"30578527"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"1648258
1"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:lates
t"],"size":"71300"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601398"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376731 image ls --format json --alsologtostderr:
I0328 03:42:36.854080 3290225 out.go:291] Setting OutFile to fd 1 ...
I0328 03:42:36.854236 3290225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.854249 3290225 out.go:304] Setting ErrFile to fd 2...
I0328 03:42:36.854255 3290225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.854481 3290225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
I0328 03:42:36.855117 3290225 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.855237 3290225 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.855722 3290225 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
I0328 03:42:36.877775 3290225 ssh_runner.go:195] Run: systemctl --version
I0328 03:42:36.877830 3290225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
I0328 03:42:36.895012 3290225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
I0328 03:42:36.997132 3290225 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
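
(note: the stdout above is a single JSON array of objects with id, repoDigests, repoTags, and size (bytes, as a string). A standalone Go sketch that decodes it; binary and profile names are from the log.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the objects in `image ls --format json` output.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, encoded as a string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-376731",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var imgs []image
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, im := range imgs {
			fmt.Println(im.RepoTags, im.Size)
		}
	}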

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376731 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "32143347"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "25039677"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:0b2c86877eb9fa50d0752d8a13a8c11a870de5aa63c21797d054c13bd1e40cdd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-376731
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17601398"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "30578527"
- id: sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "16931371"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376731 image ls --format yaml --alsologtostderr:
I0328 03:42:36.208407 3290084 out.go:291] Setting OutFile to fd 1 ...
I0328 03:42:36.208583 3290084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.208606 3290084 out.go:304] Setting ErrFile to fd 2...
I0328 03:42:36.208625 3290084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.208888 3290084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
I0328 03:42:36.209515 3290084 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.209700 3290084 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.210201 3290084 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
I0328 03:42:36.231057 3290084 ssh_runner.go:195] Run: systemctl --version
I0328 03:42:36.231113 3290084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
I0328 03:42:36.252063 3290084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
I0328 03:42:36.353320 3290084 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376731 ssh pgrep buildkitd: exit status 1 (322.530713ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image build -t localhost/my-image:functional-376731 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-376731 image build -t localhost/my-image:functional-376731 testdata/build --alsologtostderr: (2.191994958s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376731 image build -t localhost/my-image:functional-376731 testdata/build --alsologtostderr:
I0328 03:42:36.813686 3290220 out.go:291] Setting OutFile to fd 1 ...
I0328 03:42:36.814767 3290220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.814782 3290220 out.go:304] Setting ErrFile to fd 2...
I0328 03:42:36.814787 3290220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 03:42:36.815051 3290220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
I0328 03:42:36.815673 3290220 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.816225 3290220 config.go:182] Loaded profile config "functional-376731": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0328 03:42:36.816777 3290220 cli_runner.go:164] Run: docker container inspect functional-376731 --format={{.State.Status}}
I0328 03:42:36.844742 3290220 ssh_runner.go:195] Run: systemctl --version
I0328 03:42:36.844804 3290220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376731
I0328 03:42:36.862287 3290220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36244 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/functional-376731/id_rsa Username:docker}
I0328 03:42:36.960669 3290220 build_images.go:161] Building image from path: /tmp/build.3251610202.tar
I0328 03:42:36.960757 3290220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0328 03:42:36.969782 3290220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3251610202.tar
I0328 03:42:36.973340 3290220 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3251610202.tar: stat -c "%s %y" /var/lib/minikube/build/build.3251610202.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3251610202.tar': No such file or directory
I0328 03:42:36.973374 3290220 ssh_runner.go:362] scp /tmp/build.3251610202.tar --> /var/lib/minikube/build/build.3251610202.tar (3072 bytes)
I0328 03:42:37.012710 3290220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3251610202
I0328 03:42:37.024112 3290220 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3251610202 -xf /var/lib/minikube/build/build.3251610202.tar
I0328 03:42:37.035473 3290220 containerd.go:394] Building image: /var/lib/minikube/build/build.3251610202
I0328 03:42:37.035563 3290220 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3251610202 --local dockerfile=/var/lib/minikube/build/build.3251610202 --output type=image,name=localhost/my-image:functional-376731
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:2034f82fc344115cfa8a31d7a0a4381f79e82400063a8b0bd02c8eb94db6d9ef
#8 exporting manifest sha256:2034f82fc344115cfa8a31d7a0a4381f79e82400063a8b0bd02c8eb94db6d9ef 0.0s done
#8 exporting config sha256:d124ed6766b07477dedc02de247e2c86e49959528006fa6c9c5a9a491d89f9ee 0.0s done
#8 naming to localhost/my-image:functional-376731 done
#8 DONE 0.2s
I0328 03:42:38.893737 3290220 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3251610202 --local dockerfile=/var/lib/minikube/build/build.3251610202 --output type=image,name=localhost/my-image:functional-376731: (1.858141249s)
I0328 03:42:38.893824 3290220 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3251610202
I0328 03:42:38.902644 3290220 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3251610202.tar
I0328 03:42:38.911610 3290220 build_images.go:217] Built localhost/my-image:functional-376731 from /tmp/build.3251610202.tar
I0328 03:42:38.911645 3290220 build_images.go:133] succeeded building to: functional-376731
I0328 03:42:38.911651 3290220 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)
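For reference, a minimal sketch of the build this test exercises. The Dockerfile below is inferred from the buildkit steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); the working directory and file contents are illustrative assumptions, not taken from testdata/build:

	# Assumed reconstruction of testdata/build from the logged build steps.
	mkdir -p /tmp/build && cd /tmp/build
	printf 'hello\n' > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	# Same commands the test runs (see functional_test.go:314 and :447 above).
	out/minikube-linux-arm64 -p functional-376731 image build -t localhost/my-image:functional-376731 .
	out/minikube-linux-arm64 -p functional-376731 image ls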

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.890659686s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-376731
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image rm gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-376731
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-376731 image save --daemon gcr.io/google-containers/addon-resizer:functional-376731 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-376731
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-376731
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-376731
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-376731
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (132.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-923279 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 03:43:28.309351 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-923279 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m11.723582091s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (132.58s)

TestMultiControlPlane/serial/DeployApp (17.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-923279 -- rollout status deployment/busybox: (14.772209008s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-7hv7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-xrls7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-z5x29 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-7hv7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-xrls7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-z5x29 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-7hv7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-xrls7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-z5x29 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (17.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-7hv7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-7hv7b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-xrls7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-xrls7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-z5x29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-923279 -- exec busybox-7fdf7869d9-z5x29 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-923279 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-923279 -v=7 --alsologtostderr: (22.934666874s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr: (1.063483615s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.00s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-923279 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (19.7s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp testdata/cp-test.txt ha-923279:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2402099802/001/cp-test_ha-923279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279:/home/docker/cp-test.txt ha-923279-m02:/home/docker/cp-test_ha-923279_ha-923279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test_ha-923279_ha-923279-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279:/home/docker/cp-test.txt ha-923279-m03:/home/docker/cp-test_ha-923279_ha-923279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test_ha-923279_ha-923279-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279:/home/docker/cp-test.txt ha-923279-m04:/home/docker/cp-test_ha-923279_ha-923279-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test.txt"
E0328 03:45:44.462011 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test_ha-923279_ha-923279-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp testdata/cp-test.txt ha-923279-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2402099802/001/cp-test_ha-923279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m02:/home/docker/cp-test.txt ha-923279:/home/docker/cp-test_ha-923279-m02_ha-923279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test_ha-923279-m02_ha-923279.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m02:/home/docker/cp-test.txt ha-923279-m03:/home/docker/cp-test_ha-923279-m02_ha-923279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test_ha-923279-m02_ha-923279-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m02:/home/docker/cp-test.txt ha-923279-m04:/home/docker/cp-test_ha-923279-m02_ha-923279-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test_ha-923279-m02_ha-923279-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp testdata/cp-test.txt ha-923279-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2402099802/001/cp-test_ha-923279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m03:/home/docker/cp-test.txt ha-923279:/home/docker/cp-test_ha-923279-m03_ha-923279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test_ha-923279-m03_ha-923279.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m03:/home/docker/cp-test.txt ha-923279-m02:/home/docker/cp-test_ha-923279-m03_ha-923279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test_ha-923279-m03_ha-923279-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m03:/home/docker/cp-test.txt ha-923279-m04:/home/docker/cp-test_ha-923279-m03_ha-923279-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test_ha-923279-m03_ha-923279-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp testdata/cp-test.txt ha-923279-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2402099802/001/cp-test_ha-923279-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m04:/home/docker/cp-test.txt ha-923279:/home/docker/cp-test_ha-923279-m04_ha-923279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279 "sudo cat /home/docker/cp-test_ha-923279-m04_ha-923279.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m04:/home/docker/cp-test.txt ha-923279-m02:/home/docker/cp-test_ha-923279-m04_ha-923279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test_ha-923279-m04_ha-923279-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 cp ha-923279-m04:/home/docker/cp-test.txt ha-923279-m03:/home/docker/cp-test_ha-923279-m04_ha-923279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m03 "sudo cat /home/docker/cp-test_ha-923279-m04_ha-923279-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.70s)
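The pattern exercised above, condensed: minikube cp accepts <node>:<path> on either side, so a file can move host-to-node, node-to-host, and node-to-node, with ssh -n <node> used to verify each copy. A short sketch using the same profile and paths as the log:

	# Host -> primary node, then node -> node, then verify over SSH.
	out/minikube-linux-arm64 -p ha-923279 cp testdata/cp-test.txt ha-923279:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-923279 cp ha-923279:/home/docker/cp-test.txt ha-923279-m02:/home/docker/cp-test_ha-923279_ha-923279-m02.txt
	out/minikube-linux-arm64 -p ha-923279 ssh -n ha-923279-m02 "sudo cat /home/docker/cp-test_ha-923279_ha-923279-m02.txt"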

TestMultiControlPlane/serial/StopSecondaryNode (13.15s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 node stop m02 -v=7 --alsologtostderr: (12.39164881s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr: exit status 7 (760.625887ms)

-- stdout --
	ha-923279
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-923279-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923279-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-923279-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0328 03:46:11.285594 3305545 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:46:11.286679 3305545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:46:11.286696 3305545 out.go:304] Setting ErrFile to fd 2...
	I0328 03:46:11.286702 3305545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:46:11.287047 3305545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:46:11.287269 3305545 out.go:298] Setting JSON to false
	I0328 03:46:11.287307 3305545 mustload.go:65] Loading cluster: ha-923279
	I0328 03:46:11.287711 3305545 notify.go:220] Checking for updates...
	I0328 03:46:11.288925 3305545 config.go:182] Loaded profile config "ha-923279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:46:11.288987 3305545 status.go:255] checking status of ha-923279 ...
	I0328 03:46:11.291818 3305545 cli_runner.go:164] Run: docker container inspect ha-923279 --format={{.State.Status}}
	I0328 03:46:11.314135 3305545 status.go:330] ha-923279 host status = "Running" (err=<nil>)
	I0328 03:46:11.314161 3305545 host.go:66] Checking if "ha-923279" exists ...
	I0328 03:46:11.314471 3305545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-923279
	I0328 03:46:11.332058 3305545 host.go:66] Checking if "ha-923279" exists ...
	I0328 03:46:11.332571 3305545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 03:46:11.332633 3305545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-923279
	I0328 03:46:11.351471 3305545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36249 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/ha-923279/id_rsa Username:docker}
	I0328 03:46:11.449439 3305545 ssh_runner.go:195] Run: systemctl --version
	I0328 03:46:11.453705 3305545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 03:46:11.465968 3305545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 03:46:11.526548 3305545 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:72 SystemTime:2024-03-28 03:46:11.517323138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 03:46:11.527122 3305545 kubeconfig.go:125] found "ha-923279" server: "https://192.168.49.254:8443"
	I0328 03:46:11.527147 3305545 api_server.go:166] Checking apiserver status ...
	I0328 03:46:11.527188 3305545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 03:46:11.538414 3305545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup
	I0328 03:46:11.547589 3305545 api_server.go:182] apiserver freezer: "13:freezer:/docker/c5cd10b59ae20e20e6daf9918e59e3ed21c377de62665d4cd35651a0e89013fc/kubepods/burstable/pod2cdbde4e25608815b695496d7b7a277f/c2e5811d3cdd187b091b2b5c011d8c2516e9e42d76e88645f6ef6dc681750f8c"
	I0328 03:46:11.547665 3305545 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5cd10b59ae20e20e6daf9918e59e3ed21c377de62665d4cd35651a0e89013fc/kubepods/burstable/pod2cdbde4e25608815b695496d7b7a277f/c2e5811d3cdd187b091b2b5c011d8c2516e9e42d76e88645f6ef6dc681750f8c/freezer.state
	I0328 03:46:11.556712 3305545 api_server.go:204] freezer state: "THAWED"
	I0328 03:46:11.556740 3305545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 03:46:11.566554 3305545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 03:46:11.566636 3305545 status.go:422] ha-923279 apiserver status = Running (err=<nil>)
	I0328 03:46:11.566663 3305545 status.go:257] ha-923279 status: &{Name:ha-923279 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 03:46:11.566710 3305545 status.go:255] checking status of ha-923279-m02 ...
	I0328 03:46:11.567070 3305545 cli_runner.go:164] Run: docker container inspect ha-923279-m02 --format={{.State.Status}}
	I0328 03:46:11.584294 3305545 status.go:330] ha-923279-m02 host status = "Stopped" (err=<nil>)
	I0328 03:46:11.584394 3305545 status.go:343] host is not running, skipping remaining checks
	I0328 03:46:11.584405 3305545 status.go:257] ha-923279-m02 status: &{Name:ha-923279-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 03:46:11.584428 3305545 status.go:255] checking status of ha-923279-m03 ...
	I0328 03:46:11.584770 3305545 cli_runner.go:164] Run: docker container inspect ha-923279-m03 --format={{.State.Status}}
	I0328 03:46:11.600644 3305545 status.go:330] ha-923279-m03 host status = "Running" (err=<nil>)
	I0328 03:46:11.600671 3305545 host.go:66] Checking if "ha-923279-m03" exists ...
	I0328 03:46:11.601019 3305545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-923279-m03
	I0328 03:46:11.619791 3305545 host.go:66] Checking if "ha-923279-m03" exists ...
	I0328 03:46:11.620182 3305545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 03:46:11.620234 3305545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-923279-m03
	I0328 03:46:11.636178 3305545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36259 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/ha-923279-m03/id_rsa Username:docker}
	I0328 03:46:11.733801 3305545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 03:46:11.745418 3305545 kubeconfig.go:125] found "ha-923279" server: "https://192.168.49.254:8443"
	I0328 03:46:11.745446 3305545 api_server.go:166] Checking apiserver status ...
	I0328 03:46:11.745493 3305545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 03:46:11.757542 3305545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	I0328 03:46:11.766981 3305545 api_server.go:182] apiserver freezer: "13:freezer:/docker/5669acc323cf09716d3eeeb0d00ee11556773d7ab1455e90704517d65d372661/kubepods/burstable/podd5349e6c799f1469e504667bae872bf6/875ff31f34b38791a6f1b6df7a9b260691ba77c6272d879e18b43fd537e79cd2"
	I0328 03:46:11.767065 3305545 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5669acc323cf09716d3eeeb0d00ee11556773d7ab1455e90704517d65d372661/kubepods/burstable/podd5349e6c799f1469e504667bae872bf6/875ff31f34b38791a6f1b6df7a9b260691ba77c6272d879e18b43fd537e79cd2/freezer.state
	I0328 03:46:11.776844 3305545 api_server.go:204] freezer state: "THAWED"
	I0328 03:46:11.776923 3305545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 03:46:11.786155 3305545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 03:46:11.786185 3305545 status.go:422] ha-923279-m03 apiserver status = Running (err=<nil>)
	I0328 03:46:11.786196 3305545 status.go:257] ha-923279-m03 status: &{Name:ha-923279-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 03:46:11.786214 3305545 status.go:255] checking status of ha-923279-m04 ...
	I0328 03:46:11.786529 3305545 cli_runner.go:164] Run: docker container inspect ha-923279-m04 --format={{.State.Status}}
	I0328 03:46:11.813183 3305545 status.go:330] ha-923279-m04 host status = "Running" (err=<nil>)
	I0328 03:46:11.813211 3305545 host.go:66] Checking if "ha-923279-m04" exists ...
	I0328 03:46:11.813507 3305545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-923279-m04
	I0328 03:46:11.831902 3305545 host.go:66] Checking if "ha-923279-m04" exists ...
	I0328 03:46:11.832209 3305545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 03:46:11.832261 3305545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-923279-m04
	I0328 03:46:11.850698 3305545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36264 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/ha-923279-m04/id_rsa Username:docker}
	I0328 03:46:11.953197 3305545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 03:46:11.967154 3305545 status.go:257] ha-923279-m04 status: &{Name:ha-923279-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.15s)
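Worth noting from this run: with m02 stopped, status exits with code 7 instead of 0, so scripts can detect a degraded cluster from the exit code without parsing the output. A minimal sketch (code 7 is what this run returned; treating any non-zero exit as degraded is the safer check):

	out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
	rc=$?
	if [ "$rc" -ne 0 ]; then
		echo "cluster degraded: status exited with code $rc"  # 7 in the run above
	fi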

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0328 03:46:12.149873 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (17.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 node start m02 -v=7 --alsologtostderr: (16.702359103s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr: (1.095970588s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (17.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.052066764s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-923279 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-923279 -v=7 --alsologtostderr
E0328 03:46:37.937893 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:37.943235 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:37.953561 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:37.973834 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:38.014489 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:38.094818 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:38.255142 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:38.575799 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:39.216237 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:40.496451 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:43.057528 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:48.178103 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:46:58.418828 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-923279 -v=7 --alsologtostderr: (37.31244582s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-923279 --wait=true -v=7 --alsologtostderr
E0328 03:47:18.899229 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 03:47:59.859938 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-923279 --wait=true -v=7 --alsologtostderr: (1m36.087607011s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-923279
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.58s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 node delete m03 -v=7 --alsologtostderr: (10.576425521s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (35.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 stop -v=7 --alsologtostderr
E0328 03:49:21.780884 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 stop -v=7 --alsologtostderr: (35.848182266s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr: exit status 7 (113.738547ms)

-- stdout --
	ha-923279
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923279-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923279-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0328 03:49:33.083055 3319145 out.go:291] Setting OutFile to fd 1 ...
	I0328 03:49:33.083171 3319145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:49:33.083186 3319145 out.go:304] Setting ErrFile to fd 2...
	I0328 03:49:33.083195 3319145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 03:49:33.083438 3319145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 03:49:33.083627 3319145 out.go:298] Setting JSON to false
	I0328 03:49:33.083659 3319145 mustload.go:65] Loading cluster: ha-923279
	I0328 03:49:33.083756 3319145 notify.go:220] Checking for updates...
	I0328 03:49:33.084073 3319145 config.go:182] Loaded profile config "ha-923279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 03:49:33.084086 3319145 status.go:255] checking status of ha-923279 ...
	I0328 03:49:33.084935 3319145 cli_runner.go:164] Run: docker container inspect ha-923279 --format={{.State.Status}}
	I0328 03:49:33.100561 3319145 status.go:330] ha-923279 host status = "Stopped" (err=<nil>)
	I0328 03:49:33.100585 3319145 status.go:343] host is not running, skipping remaining checks
	I0328 03:49:33.100593 3319145 status.go:257] ha-923279 status: &{Name:ha-923279 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 03:49:33.100616 3319145 status.go:255] checking status of ha-923279-m02 ...
	I0328 03:49:33.100932 3319145 cli_runner.go:164] Run: docker container inspect ha-923279-m02 --format={{.State.Status}}
	I0328 03:49:33.116537 3319145 status.go:330] ha-923279-m02 host status = "Stopped" (err=<nil>)
	I0328 03:49:33.116563 3319145 status.go:343] host is not running, skipping remaining checks
	I0328 03:49:33.116570 3319145 status.go:257] ha-923279-m02 status: &{Name:ha-923279-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 03:49:33.116606 3319145 status.go:255] checking status of ha-923279-m04 ...
	I0328 03:49:33.116919 3319145 cli_runner.go:164] Run: docker container inspect ha-923279-m04 --format={{.State.Status}}
	I0328 03:49:33.140613 3319145 status.go:330] ha-923279-m04 host status = "Stopped" (err=<nil>)
	I0328 03:49:33.140647 3319145 status.go:343] host is not running, skipping remaining checks
	I0328 03:49:33.140655 3319145 status.go:257] ha-923279-m04 status: &{Name:ha-923279-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.96s)

TestMultiControlPlane/serial/RestartCluster (78.17s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-923279 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 03:50:44.462598 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-923279 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.226154926s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.17s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (44.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-923279 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-923279 --control-plane -v=7 --alsologtostderr: (43.180521082s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-923279 status -v=7 --alsologtostderr: (1.023889373s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (85.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-654862 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0328 03:52:05.621090 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-654862 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m25.802915568s)
--- PASS: TestJSONOutput/start/Command (85.81s)
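With --output=json each progress event is emitted as a single JSON line; the DistinctCurrentSteps and IncreasingCurrentSteps subtests below validate the step counters carried in those events. A sketch of consuming the stream (assumes a data.currentstep field on step events and that jq is installed; neither is shown in this log):

	out/minikube-linux-arm64 start -p json-output-654862 --output=json --user=testUser \
		--memory=2200 --wait=true --driver=docker --container-runtime=containerd \
		| jq -r '.data.currentstep // empty'   # print each step counter as it arrives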

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-654862 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-654862 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-654862 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-654862 --output=json --user=testUser: (5.74815181s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-497860 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-497860 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.944781ms)

-- stdout --
	{"specversion":"1.0","id":"bdf753b6-16b4-40d9-a7a7-9c74e9ec50fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-497860] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ed85e6b-7cdc-4160-a66d-79cf02e7a399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"7ce9af6a-eb19-4789-b5da-1d2fc8c494f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f915a5e7-b905-4789-92c5-830419be7e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig"}}
	{"specversion":"1.0","id":"3d1a834a-a766-41a4-9a64-0df040a65d0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube"}}
	{"specversion":"1.0","id":"0d62d08f-ef72-41cd-b5bc-0ed310717b5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a19a515b-5d9e-41ef-8c1a-1bbc90eaca64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e567f37f-ed00-45f0-b99f-1fb00f9ffd4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-497860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-497860
--- PASS: TestErrorJSONOutput (0.23s)
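Note: the stdout block above is the CloudEvents-style stream that --output=json produces, one JSON object per line, with the failure reported as an io.k8s.sigs.minikube.error event that carries the exit code. A minimal Go sketch of decoding such a line (the struct below is hand-written from the keys visible in the log, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the keys visible in each --output=json line above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the log above, abridged to its essential fields.
	line := `{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// The exit code travels inside data; "56" matches the process exit status seen above.
	fmt.Printf("type=%s exitcode=%s message=%q\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
}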

TestKicCustomNetwork/create_custom_network (40.26s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-477930 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-477930 --network=: (38.091350521s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-477930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-477930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-477930: (2.141581096s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.26s)

TestKicCustomNetwork/use_default_bridge_network (35.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-961845 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-961845 --network=bridge: (33.964077387s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-961845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-961845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-961845: (1.962744619s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.95s)

TestKicExistingNetwork (38s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-432311 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-432311 --network=existing-network: (35.916304097s)
helpers_test.go:175: Cleaning up "existing-network-432311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-432311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-432311: (1.965529247s)
--- PASS: TestKicExistingNetwork (38.00s)

TestKicCustomSubnet (35.21s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-292979 --subnet=192.168.60.0/24
E0328 03:55:44.462942 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-292979 --subnet=192.168.60.0/24: (32.948721707s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-292979 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-292979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-292979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-292979: (2.236205683s)
--- PASS: TestKicCustomSubnet (35.21s)
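Aside: the check behind TestKicCustomSubnet reduces to comparing the subnet Docker reports for the created network (read above via docker network inspect with the {{(index .IPAM.Config 0).Subnet}} template) against the requested --subnet. A minimal sketch of that kind of containment check with Go's net/netip; the container address below is hypothetical:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The subnet requested via --subnet=192.168.60.0/24 in the test above.
	prefix := netip.MustParsePrefix("192.168.60.0/24")

	// A hypothetical address a container on that network might receive.
	addr := netip.MustParseAddr("192.168.60.2")

	fmt.Println(prefix.Contains(addr)) // true: the address lies inside the requested subnet
}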

TestKicStaticIP (35.74s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-882617 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-882617 --static-ip=192.168.200.200: (33.527522063s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-882617 ip
helpers_test.go:175: Cleaning up "static-ip-882617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-882617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-882617: (2.052812489s)
--- PASS: TestKicStaticIP (35.74s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (67.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-580701 --driver=docker  --container-runtime=containerd
E0328 03:56:37.938067 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-580701 --driver=docker  --container-runtime=containerd: (28.884289564s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-583195 --driver=docker  --container-runtime=containerd
E0328 03:57:07.510085 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-583195 --driver=docker  --container-runtime=containerd: (32.853420613s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-580701
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-583195
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-583195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-583195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-583195: (1.941232691s)
helpers_test.go:175: Cleaning up "first-580701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-580701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-580701: (2.229257806s)
--- PASS: TestMinikubeProfile (67.20s)

TestMountStart/serial/StartWithMountFirst (6.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-892960 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-892960 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.039717579s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.04s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-892960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.02s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-906441 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-906441 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.018231821s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.02s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-906441 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-892960 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-892960 --alsologtostderr -v=5: (1.591912481s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-906441 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-906441
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-906441: (1.222208663s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-906441
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-906441: (6.537314823s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-906441 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (97.67s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-231632 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-231632 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m37.188994909s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.67s)

TestMultiNode/serial/DeployApp2Nodes (46.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-231632 -- rollout status deployment/busybox: (43.997133259s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-n8mk5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-v7257 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-n8mk5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-v7257 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-n8mk5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-v7257 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (46.02s)

TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-n8mk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-n8mk5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-v7257 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-231632 -- exec busybox-7fdf7869d9-v7257 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)
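Note: the sh pipeline in the exec calls above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, pulls the resolved host IP out of busybox nslookup output: line 5, third space-delimited field, which the pods then ping. A small Go sketch of the same extraction; the sample output is a hand-written approximation of busybox nslookup's format, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hand-written stand-in for busybox `nslookup host.minikube.internal` output.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"

	line := strings.Split(out, "\n")[4] // awk 'NR==5' selects the fifth line (index 4)
	fields := strings.Fields(line)      // roughly cut -d' ' -f3; Fields collapses repeated spaces
	fmt.Println(fields[2])              // 192.168.67.1, the host IP the pods then ping
}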

TestMultiNode/serial/AddNode (16.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-231632 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-231632 -v 3 --alsologtostderr: (16.252258922s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.95s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-231632 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.32s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp testdata/cp-test.txt multinode-231632:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606114510/001/cp-test_multinode-231632.txt
E0328 04:00:44.462214 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632:/home/docker/cp-test.txt multinode-231632-m02:/home/docker/cp-test_multinode-231632_multinode-231632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test_multinode-231632_multinode-231632-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632:/home/docker/cp-test.txt multinode-231632-m03:/home/docker/cp-test_multinode-231632_multinode-231632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test_multinode-231632_multinode-231632-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp testdata/cp-test.txt multinode-231632-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606114510/001/cp-test_multinode-231632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m02:/home/docker/cp-test.txt multinode-231632:/home/docker/cp-test_multinode-231632-m02_multinode-231632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test_multinode-231632-m02_multinode-231632.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m02:/home/docker/cp-test.txt multinode-231632-m03:/home/docker/cp-test_multinode-231632-m02_multinode-231632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test_multinode-231632-m02_multinode-231632-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp testdata/cp-test.txt multinode-231632-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1606114510/001/cp-test_multinode-231632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m03:/home/docker/cp-test.txt multinode-231632:/home/docker/cp-test_multinode-231632-m03_multinode-231632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632 "sudo cat /home/docker/cp-test_multinode-231632-m03_multinode-231632.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 cp multinode-231632-m03:/home/docker/cp-test.txt multinode-231632-m02:/home/docker/cp-test_multinode-231632-m03_multinode-231632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 ssh -n multinode-231632-m02 "sudo cat /home/docker/cp-test_multinode-231632-m03_multinode-231632-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.32s)

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-231632 node stop m03: (1.234660803s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-231632 status: exit status 7 (540.921722ms)

-- stdout --
	multinode-231632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-231632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-231632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr: exit status 7 (525.435815ms)

-- stdout --
	multinode-231632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-231632-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-231632-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0328 04:00:55.209826 3371077 out.go:291] Setting OutFile to fd 1 ...
	I0328 04:00:55.209997 3371077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:00:55.210011 3371077 out.go:304] Setting ErrFile to fd 2...
	I0328 04:00:55.210017 3371077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:00:55.210309 3371077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 04:00:55.210533 3371077 out.go:298] Setting JSON to false
	I0328 04:00:55.210586 3371077 mustload.go:65] Loading cluster: multinode-231632
	I0328 04:00:55.210642 3371077 notify.go:220] Checking for updates...
	I0328 04:00:55.211052 3371077 config.go:182] Loaded profile config "multinode-231632": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:00:55.211074 3371077 status.go:255] checking status of multinode-231632 ...
	I0328 04:00:55.211636 3371077 cli_runner.go:164] Run: docker container inspect multinode-231632 --format={{.State.Status}}
	I0328 04:00:55.230312 3371077 status.go:330] multinode-231632 host status = "Running" (err=<nil>)
	I0328 04:00:55.230341 3371077 host.go:66] Checking if "multinode-231632" exists ...
	I0328 04:00:55.230662 3371077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-231632
	I0328 04:00:55.246501 3371077 host.go:66] Checking if "multinode-231632" exists ...
	I0328 04:00:55.246828 3371077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 04:00:55.246886 3371077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-231632
	I0328 04:00:55.275433 3371077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36369 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/multinode-231632/id_rsa Username:docker}
	I0328 04:00:55.374138 3371077 ssh_runner.go:195] Run: systemctl --version
	I0328 04:00:55.378511 3371077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 04:00:55.391335 3371077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:00:55.450381 3371077 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-03-28 04:00:55.440802194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:00:55.450987 3371077 kubeconfig.go:125] found "multinode-231632" server: "https://192.168.67.2:8443"
	I0328 04:00:55.451014 3371077 api_server.go:166] Checking apiserver status ...
	I0328 04:00:55.451061 3371077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 04:00:55.463131 3371077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I0328 04:00:55.473122 3371077 api_server.go:182] apiserver freezer: "13:freezer:/docker/fc5b657d9fe9343f448d6e1e0a280f9f2698f4be65fd9a52e090ae0d9e24b4a4/kubepods/burstable/pod7a8015573ce5eb8517e022c728a2cb71/ed1d87289328ceca3a470a46030ef00637c40101b1390acf3db8c6ad85adfaf5"
	I0328 04:00:55.473205 3371077 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fc5b657d9fe9343f448d6e1e0a280f9f2698f4be65fd9a52e090ae0d9e24b4a4/kubepods/burstable/pod7a8015573ce5eb8517e022c728a2cb71/ed1d87289328ceca3a470a46030ef00637c40101b1390acf3db8c6ad85adfaf5/freezer.state
	I0328 04:00:55.482465 3371077 api_server.go:204] freezer state: "THAWED"
	I0328 04:00:55.482502 3371077 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0328 04:00:55.490521 3371077 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0328 04:00:55.490557 3371077 status.go:422] multinode-231632 apiserver status = Running (err=<nil>)
	I0328 04:00:55.490596 3371077 status.go:257] multinode-231632 status: &{Name:multinode-231632 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 04:00:55.490630 3371077 status.go:255] checking status of multinode-231632-m02 ...
	I0328 04:00:55.491020 3371077 cli_runner.go:164] Run: docker container inspect multinode-231632-m02 --format={{.State.Status}}
	I0328 04:00:55.509100 3371077 status.go:330] multinode-231632-m02 host status = "Running" (err=<nil>)
	I0328 04:00:55.509126 3371077 host.go:66] Checking if "multinode-231632-m02" exists ...
	I0328 04:00:55.509456 3371077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-231632-m02
	I0328 04:00:55.525199 3371077 host.go:66] Checking if "multinode-231632-m02" exists ...
	I0328 04:00:55.525521 3371077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 04:00:55.525582 3371077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-231632-m02
	I0328 04:00:55.541345 3371077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/18485-3249988/.minikube/machines/multinode-231632-m02/id_rsa Username:docker}
	I0328 04:00:55.637363 3371077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 04:00:55.649335 3371077 status.go:257] multinode-231632-m02 status: &{Name:multinode-231632-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0328 04:00:55.649372 3371077 status.go:255] checking status of multinode-231632-m03 ...
	I0328 04:00:55.649692 3371077 cli_runner.go:164] Run: docker container inspect multinode-231632-m03 --format={{.State.Status}}
	I0328 04:00:55.665270 3371077 status.go:330] multinode-231632-m03 host status = "Stopped" (err=<nil>)
	I0328 04:00:55.665294 3371077 status.go:343] host is not running, skipping remaining checks
	I0328 04:00:55.665303 3371077 status.go:257] multinode-231632-m03 status: &{Name:multinode-231632-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
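Aside: the &{Name:... Host:Running ...} values at the end of the stderr trace above are Go structs rendered with fmt's %+v verb. A toy sketch reproducing that shape (the type below is invented for illustration; it is not minikube's actual status type):

package main

import "fmt"

// nodeStatus is an illustrative stand-in with the fields visible in the trace above.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
	TimeToStop, DockerEnv, PodManEnv           string
}

func main() {
	s := nodeStatus{
		Name: "multinode-231632-m03", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true,
	}
	// Printing a pointer with %+v yields the &{Field:value ...} form seen in the log.
	fmt.Printf("%+v\n", &s)
}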

TestMultiNode/serial/StartAfterStop (9.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-231632 node start m03 -v=7 --alsologtostderr: (9.054471925s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.85s)

TestMultiNode/serial/RestartKeepsNodes (84.75s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-231632
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-231632
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-231632: (25.018819478s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-231632 --wait=true -v=8 --alsologtostderr
E0328 04:01:37.937363 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-231632 --wait=true -v=8 --alsologtostderr: (59.567036708s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-231632
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.75s)

TestMultiNode/serial/DeleteNode (5.4s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-231632 node delete m03: (4.74257238s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.40s)
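Note: the go-template passed to kubectl above walks every node's conditions and prints the status of the Ready condition, so the assertion is simply that each emitted line reads True. A self-contained sketch evaluating the exact same template with Go's text/template; the JSON is a hand-built stand-in for kubectl get nodes -o json, trimmed to the fields the template reads:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hand-built stand-in for `kubectl get nodes -o json` (two Ready nodes).
	raw := `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var data any
	if err := json.Unmarshal([]byte(raw), &data); err != nil {
		panic(err)
	}

	// The exact template string from the kubectl invocation above.
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Output: one " True" line per node whose Ready condition is True.
}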

TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-231632 stop: (23.807947553s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-231632 status: exit status 7 (88.134651ms)

-- stdout --
	multinode-231632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-231632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr: exit status 7 (93.756214ms)

-- stdout --
	multinode-231632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-231632-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0328 04:02:59.619913 3378757 out.go:291] Setting OutFile to fd 1 ...
	I0328 04:02:59.620070 3378757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:02:59.620095 3378757 out.go:304] Setting ErrFile to fd 2...
	I0328 04:02:59.620111 3378757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:02:59.620409 3378757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 04:02:59.620611 3378757 out.go:298] Setting JSON to false
	I0328 04:02:59.620674 3378757 mustload.go:65] Loading cluster: multinode-231632
	I0328 04:02:59.620719 3378757 notify.go:220] Checking for updates...
	I0328 04:02:59.621108 3378757 config.go:182] Loaded profile config "multinode-231632": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:02:59.621123 3378757 status.go:255] checking status of multinode-231632 ...
	I0328 04:02:59.621627 3378757 cli_runner.go:164] Run: docker container inspect multinode-231632 --format={{.State.Status}}
	I0328 04:02:59.638341 3378757 status.go:330] multinode-231632 host status = "Stopped" (err=<nil>)
	I0328 04:02:59.638361 3378757 status.go:343] host is not running, skipping remaining checks
	I0328 04:02:59.638368 3378757 status.go:257] multinode-231632 status: &{Name:multinode-231632 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 04:02:59.638395 3378757 status.go:255] checking status of multinode-231632-m02 ...
	I0328 04:02:59.638701 3378757 cli_runner.go:164] Run: docker container inspect multinode-231632-m02 --format={{.State.Status}}
	I0328 04:02:59.653954 3378757 status.go:330] multinode-231632-m02 host status = "Stopped" (err=<nil>)
	I0328 04:02:59.653979 3378757 status.go:343] host is not running, skipping remaining checks
	I0328 04:02:59.653988 3378757 status.go:257] multinode-231632-m02 status: &{Name:multinode-231632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

TestMultiNode/serial/RestartMultiNode (55.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-231632 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0328 04:03:00.981682 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-231632 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.035096265s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-231632 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.72s)

TestMultiNode/serial/ValidateNameConflict (37.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-231632
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-231632-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-231632-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.779471ms)

-- stdout --
	* [multinode-231632-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-231632-m02' is duplicated with machine name 'multinode-231632-m02' in profile 'multinode-231632'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-231632-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-231632-m03 --driver=docker  --container-runtime=containerd: (34.622967724s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-231632
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-231632: exit status 80 (333.298528ms)

-- stdout --
	* Adding node m03 to cluster multinode-231632 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-231632-m03 already exists in multinode-231632-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-231632-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-231632-m03: (1.948306772s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.07s)
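Aside: both non-zero exits above come down to profile/node name bookkeeping: exit 14 (MK_USAGE) because the requested profile name collides with a machine name inside the existing multinode-231632 profile, and exit 80 because node add derives a node name that already exists as a standalone profile. A toy sketch of the first check; the function and data are illustrative, not minikube's code:

package main

import "fmt"

// nameConflicts reports whether a proposed profile name collides with any
// machine name already claimed by an existing profile.
func nameConflicts(proposed string, machineNames []string) bool {
	for _, m := range machineNames {
		if m == proposed {
			return true
		}
	}
	return false
}

func main() {
	machines := []string{"multinode-231632", "multinode-231632-m02", "multinode-231632-m03"}
	fmt.Println(nameConflicts("multinode-231632-m02", machines)) // true, hence the MK_USAGE exit
}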

TestPreload (107.78s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-526769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0328 04:05:44.462309 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-526769 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.476432918s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-526769 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-526769 image pull gcr.io/k8s-minikube/busybox: (1.357677606s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-526769
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-526769: (12.025959336s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-526769 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-526769 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.029437163s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-526769 image list
helpers_test.go:175: Cleaning up "test-preload-526769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-526769
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-526769: (2.498565299s)
--- PASS: TestPreload (107.78s)

                                                
                                    
TestScheduledStopUnix (106.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-103834 --memory=2048 --driver=docker  --container-runtime=containerd
E0328 04:06:37.937555 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-103834 --memory=2048 --driver=docker  --container-runtime=containerd: (31.049319281s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-103834 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-103834 -n scheduled-stop-103834
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-103834 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-103834 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-103834 -n scheduled-stop-103834
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-103834
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-103834 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-103834
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-103834: exit status 7 (68.465381ms)
-- stdout --
	scheduled-stop-103834
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-103834 -n scheduled-stop-103834
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-103834 -n scheduled-stop-103834: exit status 7 (70.977254ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-103834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-103834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-103834: (4.300867762s)
--- PASS: TestScheduledStopUnix (106.92s)
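The run above arms a stop 5m out, re-arms it at 15s, cancels it, then re-arms it and lets it fire; the "os: process already finished" lines are the test confirming that a previously scheduled stop process is already gone. A minimal Go sketch of the arm/cancel idea using an in-process timer (an assumption for illustration; minikube actually daemonizes a separate process, which is why the test checks process signals):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// `minikube stop --schedule 15s` arms a deferred stop.
		stop := time.AfterFunc(15*time.Second, func() {
			fmt.Println("stopping cluster")
		})

		// `--cancel-scheduled` disarms it; Stop reports whether the
		// timer was still pending when we cancelled.
		if stop.Stop() {
			fmt.Println("scheduled stop cancelled; host stays Running")
		}
	}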

                                                
                                    
TestInsufficientStorage (10.02s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-852074 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-852074 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.564270639s)
-- stdout --
	{"specversion":"1.0","id":"687be540-03a7-405d-8ecd-2786d17be959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-852074] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8d90bc6-bf36-4503-a40d-58e696eaaa00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"8234ba48-885c-4ae3-b3be-d5c977a1f918","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbe8cfa7-1822-4cfd-9b3c-511ad3d6c634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig"}}
	{"specversion":"1.0","id":"6f490a6d-b9c2-4f09-8787-c016f9d06a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube"}}
	{"specversion":"1.0","id":"28534e5b-71c0-47ac-83fd-f0373082e026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0b1a9119-f8ec-4803-b3d4-60d74ec475ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d6a11ad-a5a9-4ca1-add8-0112bc6e7af6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"36dfed8e-ad9e-45d8-a329-66bdd13cecef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"825e4534-5a20-48ec-8e3b-12304c14fc5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c32603f-ec67-42ae-a693-c59578fa3e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"59ae23c0-cb8e-410b-b9c9-1fa799436eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-852074\" primary control-plane node in \"insufficient-storage-852074\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a22f486a-5c21-497a-a0aa-95f5cad2a07f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1711559786-18485 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"236b10c2-24f0-4162-820f-b690a167c9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6cc8882-2655-4dfa-81db-40e02c5aedbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-852074 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-852074 --output=json --layout=cluster: exit status 7 (281.972971ms)
-- stdout --
	{"Name":"insufficient-storage-852074","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-852074","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0328 04:08:19.038773 3396301 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-852074" does not appear in /home/jenkins/minikube-integration/18485-3249988/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-852074 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-852074 --output=json --layout=cluster: exit status 7 (286.692553ms)
-- stdout --
	{"Name":"insufficient-storage-852074","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-852074","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0328 04:08:19.324479 3396357 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-852074" does not appear in /home/jenkins/minikube-integration/18485-3249988/kubeconfig
	E0328 04:08:19.334623 3396357 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/insufficient-storage-852074/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-852074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-852074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-852074: (1.882360667s)
--- PASS: TestInsufficientStorage (10.02s)
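With --output=json, every progress line in the stdout above is a CloudEvents-style JSON object (specversion 1.0); the event type distinguishes steps, info lines, and errors such as RSRC_DOCKER_STORAGE with its exit code 26. A minimal Go sketch that decodes lines of that shape from stdin; the struct mirrors only the fields visible in this log and is not minikube's own type:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the CloudEvents-style lines printed by
	// `minikube start --output=json` as captured above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be very long lines
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip anything that is not a JSON event
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}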

                                                
                                    
TestRunningBinaryUpgrade (87.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1897150529 start -p running-upgrade-475525 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1897150529 start -p running-upgrade-475525 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.1127461s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-475525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 04:13:47.510274 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-475525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.246342671s)
helpers_test.go:175: Cleaning up "running-upgrade-475525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-475525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-475525: (2.44920972s)
--- PASS: TestRunningBinaryUpgrade (87.96s)

                                                
                                    
TestKubernetesUpgrade (378.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.943989976s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-723039
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-723039: (1.350926785s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-723039 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-723039 status --format={{.Host}}: exit status 7 (103.381222ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 04:10:44.466400 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m57.017656573s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-723039 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (98.644051ms)
-- stdout --
	* [kubernetes-upgrade-723039] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-723039
	    minikube start -p kubernetes-upgrade-723039 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7230392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-723039 --kubernetes-version=v1.30.0-beta.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0328 04:15:44.461992 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-723039 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.858498274s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-723039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-723039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-723039: (2.504195915s)
--- PASS: TestKubernetesUpgrade (378.04s)
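Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above is a guard, not a crash: minikube refuses to move an existing cluster to an older Kubernetes version, and the follow-up start with the same v1.30.0-beta.0 succeeds in ~17s because nothing has to change. A minimal sketch of such a version gate in Go using golang.org/x/mod/semver; the message and exit code are copied from the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing, requested := "v1.30.0-beta.0", "v1.20.0"
		// Refuse any request strictly older than what the cluster already runs.
		if semver.Compare(requested, existing) < 0 {
			fmt.Fprintf(os.Stderr,
				"Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				existing, requested)
			os.Exit(106)
		}
	}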

                                                
                                    
TestMissingContainerUpgrade (158.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1259902506 start -p missing-upgrade-110609 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1259902506 start -p missing-upgrade-110609 --memory=2200 --driver=docker  --container-runtime=containerd: (1m26.196351711s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-110609
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-110609: (1.867004776s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-110609
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-110609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-110609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.428799053s)
helpers_test.go:175: Cleaning up "missing-upgrade-110609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-110609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-110609: (4.448971591s)
--- PASS: TestMissingContainerUpgrade (158.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.921802ms)
-- stdout --
	* [NoKubernetes-159535] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
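Exit status 14 here is pure flag validation: --kubernetes-version contradicts --no-kubernetes, so minikube bails out before touching the driver, which is why the subtest takes only 0.09s. A minimal Go sketch of that kind of mutual-exclusion check with the standard flag package; the flag names and exit code mirror the log, the program itself is illustrative:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// The two flags are mutually exclusive, as the stderr above explains.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}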

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-159535 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-159535 --driver=docker  --container-runtime=containerd: (43.147142312s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-159535 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.999731114s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-159535 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-159535 status -o json: exit status 2 (295.748055ms)
-- stdout --
	{"Name":"NoKubernetes-159535","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-159535
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-159535: (1.960163912s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.26s)

                                                
                                    
TestNoKubernetes/serial/Start (6.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-159535 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.322526295s)
--- PASS: TestNoKubernetes/serial/Start (6.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-159535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-159535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.127069ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
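`systemctl is-active --quiet` prints nothing and communicates only through its exit code (0 = active; systemd uses 3 for inactive), so the ssh wrapper's "Process exited with status 3" above is exactly the "kubelet is not running" answer this test wants. A minimal Go sketch of running the same probe locally and reading the exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			// 3 means "inactive", matching the log above.
			fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}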

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-159535
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-159535: (1.322199982s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-159535 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-159535 --driver=docker  --container-runtime=containerd: (8.472496733s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.47s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-159535 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-159535 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.270474ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.33s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.33s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (108.89s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3040067645 start -p stopped-upgrade-946327 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0328 04:11:37.938268 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3040067645 start -p stopped-upgrade-946327 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.658567144s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3040067645 -p stopped-upgrade-946327 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3040067645 -p stopped-upgrade-946327 stop: (19.917240586s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-946327 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-946327 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.303233625s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.89s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-946327
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-946327: (1.098575422s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestPause/serial/Start (88.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-922033 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-922033 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m28.335832099s)
--- PASS: TestPause/serial/Start (88.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-922033 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-922033 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.001167786s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.01s)

                                                
                                    
TestPause/serial/Pause (1.24s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-922033 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-922033 --alsologtostderr -v=5: (1.239476465s)
--- PASS: TestPause/serial/Pause (1.24s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-922033 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-922033 --output=json --layout=cluster: exit status 2 (406.407425ms)
-- stdout --
	{"Name":"pause-922033","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-922033","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
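The --layout=cluster payload above reuses HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage (the same scheme as the TestInsufficientStorage output earlier), and a paused cluster also makes the status command itself exit 2. A minimal Go sketch decoding just the fields shown; the struct is illustrative, not minikube's own:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// clusterState mirrors the visible fields of
	// `minikube status --output=json --layout=cluster`.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		var st clusterState
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 Paused
	}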

                                                
                                    
TestPause/serial/Unpause (1.09s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-922033 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-922033 --alsologtostderr -v=5: (1.088058746s)
--- PASS: TestPause/serial/Unpause (1.09s)

                                                
                                    
TestPause/serial/PauseAgain (1.24s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-922033 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-922033 --alsologtostderr -v=5: (1.243828951s)
--- PASS: TestPause/serial/PauseAgain (1.24s)

                                                
                                    
TestPause/serial/DeletePaused (4.66s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-922033 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-922033 --alsologtostderr -v=5: (4.65950338s)
--- PASS: TestPause/serial/DeletePaused (4.66s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.68s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-922033
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-922033: exit status 1 (46.397112ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-922033: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)

                                                
                                    
TestNetworkPlugins/group/false (5.16s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-406050 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-406050 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (228.677639ms)
-- stdout --
	* [false-406050] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0328 04:16:12.356006 3436282 out.go:291] Setting OutFile to fd 1 ...
	I0328 04:16:12.356210 3436282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:16:12.356223 3436282 out.go:304] Setting ErrFile to fd 2...
	I0328 04:16:12.356230 3436282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 04:16:12.356552 3436282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-3249988/.minikube/bin
	I0328 04:16:12.356996 3436282 out.go:298] Setting JSON to false
	I0328 04:16:12.358037 3436282 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":43110,"bootTime":1711556262,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0328 04:16:12.358119 3436282 start.go:139] virtualization:  
	I0328 04:16:12.363014 3436282 out.go:177] * [false-406050] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 04:16:12.365712 3436282 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 04:16:12.367456 3436282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 04:16:12.365774 3436282 notify.go:220] Checking for updates...
	I0328 04:16:12.369909 3436282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-3249988/kubeconfig
	I0328 04:16:12.372007 3436282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-3249988/.minikube
	I0328 04:16:12.374370 3436282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 04:16:12.377405 3436282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 04:16:12.379900 3436282 config.go:182] Loaded profile config "force-systemd-flag-034881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0328 04:16:12.379994 3436282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 04:16:12.398639 3436282 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 04:16:12.398768 3436282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 04:16:12.503908 3436282 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 04:16:12.490985786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215109632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 04:16:12.504098 3436282 docker.go:295] overlay module found
	I0328 04:16:12.506593 3436282 out.go:177] * Using the docker driver based on user configuration
	I0328 04:16:12.508450 3436282 start.go:297] selected driver: docker
	I0328 04:16:12.508509 3436282 start.go:901] validating driver "docker" against <nil>
	I0328 04:16:12.508538 3436282 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 04:16:12.511018 3436282 out.go:177] 
	W0328 04:16:12.513113 3436282 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0328 04:16:12.514858 3436282 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-406050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-406050

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-406050

>>> host: /etc/nsswitch.conf:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: /etc/hosts:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: /etc/resolv.conf:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-406050

>>> host: crictl pods:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: crictl containers:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> k8s: describe netcat deployment:
error: context "false-406050" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-406050" does not exist

>>> k8s: netcat logs:
error: context "false-406050" does not exist

>>> k8s: describe coredns deployment:
error: context "false-406050" does not exist

>>> k8s: describe coredns pods:
error: context "false-406050" does not exist

>>> k8s: coredns logs:
error: context "false-406050" does not exist

>>> k8s: describe api server pod(s):
error: context "false-406050" does not exist

>>> k8s: api server logs:
error: context "false-406050" does not exist

>>> host: /etc/cni:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: ip a s:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: ip r s:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: iptables-save:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: iptables table nat:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> k8s: describe kube-proxy daemon set:
error: context "false-406050" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-406050" does not exist

>>> k8s: kube-proxy logs:
error: context "false-406050" does not exist

>>> host: kubelet daemon status:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: kubelet daemon config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> k8s: kubelet logs:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-406050

>>> host: docker daemon status:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: docker daemon config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: /etc/docker/daemon.json:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: docker system info:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

>>> host: cri-docker daemon status:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-406050"

                                                
                                                
----------------------- debugLogs end: false-406050 [took: 4.736306048s] --------------------------------
helpers_test.go:175: Cleaning up "false-406050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-406050
--- PASS: TestNetworkPlugins/group/false (5.16s)
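
Every "context does not exist" line above traces back to the empty kubeconfig shown under ">>> k8s: kubectl config:": this profile was only exercised for flag validation and never started, so no cluster, context, or user entry was ever written. As a minimal sketch of how a harness could check for a context before shelling out to kubectl, assuming the standard k8s.io/client-go module (contextExists is an illustrative helper, not a minikube or test-suite API):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context is present in the
// kubeconfig that kubectl would use (~/.kube/config or $KUBECONFIG).
func contextExists(name string) (bool, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists("false-406050")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// With the empty Config dumped above (contexts: null), this prints false.
	fmt.Println("context present:", ok)
}
```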

TestStartStop/group/old-k8s-version/serial/FirstStart (152.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140381 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0328 04:19:40.982470 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-140381 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m32.031388699s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.03s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-140381 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [713c95bc-bc6f-4168-a309-7ca6e03cedd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [713c95bc-bc6f-4168-a309-7ca6e03cedd3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004233173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-140381 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.55s)
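
The final step of DeployApp execs `ulimit -n` inside the busybox pod as a cheap shell-and-liveness check. A standalone sketch of the same probe using os/exec; runUlimitProbe is our name, not the harness's:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runUlimitProbe mirrors the test's sanity check: exec `ulimit -n` in the
// busybox pod and return the open-file limit it reports.
func runUlimitProbe(kubeContext string) (string, error) {
	out, err := exec.Command(
		"kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n",
	).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	limit, err := runUlimitProbe("old-k8s-version-140381")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("open-file limit in pod:", limit)
}
```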

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-140381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-140381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.37995255s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-140381 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)
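
The --images and --registries flags rewrite where the metrics-server addon pulls from, which is how the test can point it at the placeholder registry fake.domain. One way to read back what the deployment actually rendered, as a sketch (the expectation that the image string ends up prefixed with fake.domain is our assumption about how the override is applied):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read the image the metrics-server deployment is configured to pull;
	// after the override it should reference the fake.domain registry.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-140381",
		"get", "deploy", "metrics-server", "-n", "kube-system",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("metrics-server image:", string(out))
}
```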

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-697565 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-697565 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m38.009442162s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.01s)

TestStartStop/group/old-k8s-version/serial/Stop (12.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-140381 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-140381 --alsologtostderr -v=3: (12.377063009s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140381 -n old-k8s-version-140381
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140381 -n old-k8s-version-140381: exit status 7 (132.10647ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-140381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)
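
`minikube status` deliberately exits non-zero when the cluster is down, which is why the harness logs "exit status 7 (may be ok)" instead of failing. A sketch of tolerating that pattern with os/exec (the exact exit-code semantics belong to minikube; hostState is our helper name):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState runs `minikube status --format={{.Host}}` and tolerates the
// non-zero exit code a stopped cluster produces. The status text (e.g.
// "Stopped") is still written to stdout even when the command exits non-zero.
func hostState(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // non-zero, but may be ok
	}
	return string(out), 0, err
}

func main() {
	state, code, err := hostState("old-k8s-version-140381")
	fmt.Println(state, code, err)
}
```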

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697565 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a1c31ae-4b2b-41b3-9a0e-f3951f13d89d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a1c31ae-4b2b-41b3-9a0e-f3951f13d89d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003445021s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-697565 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-697565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-697565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044295946s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-697565 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-697565 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-697565 --alsologtostderr -v=3: (12.107428476s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565: exit status 7 (83.269862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-697565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-697565 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0328 04:25:44.462804 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 04:26:37.937965 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-697565 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m36.712930909s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.09s)
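
Because this profile was started with --apiserver-port=8444, its kubeconfig cluster entry should carry that port in the server URL. A quick way to confirm, as a sketch (minikube names the cluster entry after the profile by convention):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pull the API server URL for this profile's cluster entry; with
	// --apiserver-port=8444 the URL should end in :8444.
	out, err := exec.Command("kubectl", "config", "view", "-o",
		`jsonpath={.clusters[?(@.name=="default-k8s-diff-port-697565")].cluster.server}`,
	).Output()
	fmt.Println("server:", string(out), err)
}
```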

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2kmz9" [ae50a1f5-5a5e-4692-bf6f-996ad5b6930a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004728446s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
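
The UserAppExistsAfterStop helpers poll for pods matching a label selector until they report Running. `kubectl wait` expresses roughly the same condition in one command; a sketch, not the harness's actual polling loop:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the dashboard pod is Ready, or time out after 9 minutes
	// (the same budget the test gives its own poll loop).
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-140381",
		"wait", "--for=condition=ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m").CombinedOutput()
	fmt.Println(string(out), err)
}
```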

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zcmb4" [069e0770-08f7-4eb7-a41d-48bdf4cb552b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003507841s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2kmz9" [ae50a1f5-5a5e-4692-bf6f-996ad5b6930a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004649761s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-140381 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zcmb4" [069e0770-08f7-4eb7-a41d-48bdf4cb552b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004100391s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-697565 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-140381 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
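
VerifyKubernetesImages shells out to `image list --format=json` and flags images that are not part of the stock minikube set. A sketch of consuming that JSON; the repoTags field name is an assumption about the emitted schema, so verify it against your minikube version before relying on it:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo models only the field this check needs; other fields in the
// JSON are ignored by encoding/json.
type imageInfo struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "old-k8s-version-140381", "image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected schema:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. kindest/kindnetd:v20240202-8f1494ea
		}
	}
}
```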

TestStartStop/group/old-k8s-version/serial/Pause (2.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-140381 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140381 -n old-k8s-version-140381
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140381 -n old-k8s-version-140381: exit status 2 (325.246928ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140381 -n old-k8s-version-140381
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140381 -n old-k8s-version-140381: exit status 2 (329.083204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-140381 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140381 -n old-k8s-version-140381
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140381 -n old-k8s-version-140381
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.96s)
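
The Pause test drives the pause/unpause cycle and reads individual status fields through Go templates: while paused, the profile reports APIServer=Paused and Kubelet=Stopped, each with a non-zero exit code the test accepts. A condensed sketch of that cycle (fieldStatus is our helper name):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// fieldStatus reads one field of `minikube status` via a Go template,
// ignoring the non-zero exit code a paused cluster produces (the log's
// "exit status 2 (may be ok)"); the text is still written to stdout.
func fieldStatus(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "old-k8s-version-140381"
	exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run()
	fmt.Println(fieldStatus(profile, "APIServer"), fieldStatus(profile, "Kubelet"))
	exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run()
	fmt.Println(fieldStatus(profile, "APIServer"), fieldStatus(profile, "Kubelet"))
}
```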

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-697565 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-697565 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565: exit status 2 (470.517578ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565: exit status 2 (366.789781ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-697565 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-697565 --alsologtostderr -v=1: (1.042319614s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-697565 -n default-k8s-diff-port-697565
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.07s)

TestStartStop/group/embed-certs/serial/FirstStart (92.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-804280 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-804280 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m32.133964869s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.13s)

TestStartStop/group/no-preload/serial/FirstStart (74.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-860536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-860536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m14.113078903s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.11s)

TestStartStop/group/no-preload/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-860536 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07ab0887-2560-44d1-9086-5af8a5febc21] Pending
helpers_test.go:344: "busybox" [07ab0887-2560-44d1-9086-5af8a5febc21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07ab0887-2560-44d1-9086-5af8a5febc21] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004215059s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-860536 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-860536 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-860536 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-860536 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-860536 --alsologtostderr -v=3: (12.121637742s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804280 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d10b0a45-995c-43a3-9bd0-db33f885092e] Pending
helpers_test.go:344: "busybox" [d10b0a45-995c-43a3-9bd0-db33f885092e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d10b0a45-995c-43a3-9bd0-db33f885092e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004279833s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804280 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-860536 -n no-preload-860536
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-860536 -n no-preload-860536: exit status 7 (83.159575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-860536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (265.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-860536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-860536 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (4m25.358894299s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-860536 -n no-preload-860536
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (265.74s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-804280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-804280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160536285s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-804280 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-804280 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-804280 --alsologtostderr -v=3: (12.101509398s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804280 -n embed-certs-804280
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804280 -n embed-certs-804280: exit status 7 (137.843432ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-804280 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (267.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-804280 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0328 04:30:19.606659 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.612113 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.622418 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.642804 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.683085 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.763482 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:19.923748 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:20.244404 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:20.884878 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:22.165458 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:24.726018 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:27.511361 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 04:30:29.846601 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:40.086847 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:30:44.462600 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 04:31:00.568041 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:31:37.937867 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 04:31:41.528489 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
E0328 04:32:07.057271 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.062543 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.072769 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.093034 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.133307 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.213613 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.373863 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:07.694359 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:08.335451 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:09.615913 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:12.176444 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:17.297253 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:27.538388 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:32:48.019498 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
E0328 04:33:03.449303 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-804280 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m27.039053591s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804280 -n embed-certs-804280
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.53s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-87ktd" [0ee1b7fe-c883-4e31-ac65-b5b825fee0ee] Running
E0328 04:33:28.980154 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003192353s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-87ktd" [0ee1b7fe-c883-4e31-ac65-b5b825fee0ee] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005045962s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-860536 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-860536 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-860536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-860536 --alsologtostderr -v=1: (1.241285994s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-860536 -n no-preload-860536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-860536 -n no-preload-860536: exit status 2 (388.702361ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-860536 -n no-preload-860536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-860536 -n no-preload-860536: exit status 2 (367.17695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-860536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-860536 -n no-preload-860536
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-860536 -n no-preload-860536
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.63s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fh8cn" [69ead81a-42cc-4fd2-9fa5-ac11deaba6ae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004075966s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (54.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-396609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-396609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (54.172556241s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.17s)
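
newest-cni passes the pod network range through --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, which should surface as the node's podCIDR. A one-line check as a sketch, assuming a single-node profile:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// On a single-node cluster the node's podCIDR should fall inside
	// the 10.42.0.0/16 range handed to kubeadm at start time.
	out, err := exec.Command("kubectl", "--context", "newest-cni-396609",
		"get", "node", "-o", "jsonpath={.items[0].spec.podCIDR}").Output()
	fmt.Println("podCIDR:", string(out), err)
}
```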

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fh8cn" [69ead81a-42cc-4fd2-9fa5-ac11deaba6ae] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004373278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-804280 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-804280 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-804280 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804280 -n embed-certs-804280
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804280 -n embed-certs-804280: exit status 2 (394.766997ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804280 -n embed-certs-804280
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804280 -n embed-certs-804280: exit status 2 (395.994384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-804280 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804280 -n embed-certs-804280
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804280 -n embed-certs-804280
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.00s)
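
For reference, the Pause subtests above all drive the same CLI sequence. A rough manual equivalent is sketched below (the profile name comes from the log; the PROFILE variable is added here for brevity, and the non-zero status exits while paused are expected):

	PROFILE=embed-certs-804280
	out/minikube-linux-arm64 pause -p "$PROFILE" --alsologtostderr -v=1
	# While paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} prints "Stopped";
	# status signals this with exit status 2 (hence "may be ok" above), so tolerate it.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p "$PROFILE" || true
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p "$PROFILE" || true
	out/minikube-linux-arm64 unpause -p "$PROFILE" --alsologtostderr -v=1
	# After unpause, both templates should print "Running" and exit 0 again.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p "$PROFILE"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p "$PROFILE"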

                                                
                                    
TestNetworkPlugins/group/auto/Start (88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m28.002365104s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-396609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-396609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.201229635s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-396609 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-396609 --alsologtostderr -v=3: (1.274288335s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-396609 -n newest-cni-396609
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-396609 -n newest-cni-396609: exit status 7 (84.332981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-396609 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
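
The exit status 7 above is how a stopped host is detected: minikube status encodes machine state in its exit code rather than failing outright (the harness itself notes "may be ok"). A minimal sketch of the same check, using only commands from the log:

	# A non-zero status exit means the host is not running; an addon enabled
	# now simply takes effect on the next start.
	if ! out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-396609 -n newest-cni-396609; then
		out/minikube-linux-arm64 addons enable dashboard -p newest-cni-396609 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	fi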

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-396609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0328 04:34:50.901840 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-396609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (16.911036335s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-396609 -n newest-cni-396609
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-396609 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-396609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-396609 -n newest-cni-396609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-396609 -n newest-cni-396609: exit status 2 (429.749504ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-396609 -n newest-cni-396609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-396609 -n newest-cni-396609: exit status 2 (384.79365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-396609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-396609 -n newest-cni-396609
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-396609 -n newest-cni-396609
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.50s)
E0328 04:40:44.462906 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/addons-340351/client.crt: no such file or directory
E0328 04:40:51.813135 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/auto-406050/client.crt: no such file or directory
E0328 04:41:12.293744 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/auto-406050/client.crt: no such file or directory
E0328 04:41:24.085260 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/no-preload-860536/client.crt: no such file or directory
E0328 04:41:36.940717 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:36.946152 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:36.956510 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:36.976820 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:37.017229 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:37.097805 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:37.258292 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:37.578772 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:37.937308 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
E0328 04:41:38.219681 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:39.499928 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
E0328 04:41:42.060916 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
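
(The cert_rotation lines interleaved here are background noise rather than test failures: client-go's certificate reloader keeps watching client.crt paths for profiles that earlier tests have already deleted, so the opens fail with "no such file or directory". They recur throughout the report wherever a watched profile has been torn down, alongside tests that still pass.)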

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0328 04:35:19.606959 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/old-k8s-version-140381/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m31.714166557s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.71s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6jdqv" [ec594090-aec5-4bd7-84a7-34bed119177d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6jdqv" [ec594090-aec5-4bd7-84a7-34bed119177d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003696611s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.43s)
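
Each NetCatPod step re-creates the probe deployment and waits for its pod to come up healthy. Outside the Go harness the equivalent is roughly (a sketch; the testdata path is relative to the minikube test tree, and kubectl wait stands in for the harness's own polling):

	kubectl --context auto-406050 replace --force -f testdata/netcat-deployment.yaml
	# Wait for the same app=netcat pod the harness watches in the default namespace.
	kubectl --context auto-406050 -n default wait pod -l app=netcat --for=condition=Ready --timeout=15m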

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
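
The DNS/Localhost/HairPin trio above reduces to three execs against that netcat deployment, checking service DNS, in-pod loopback, and hairpin traffic back through the pod's own service (the CTX variable is added here for brevity):

	CTX=auto-406050
	kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same three probes repeat verbatim for every network plugin below, with only the context name changing.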

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0328 04:36:20.982699 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.063885231s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nptkv" [72bab68c-781d-4446-9054-0399db96b9bc] Running
E0328 04:36:37.937626 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/functional-376731/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004980009s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kdq8m" [66057b27-661b-418f-beee-c3635131ec82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kdq8m" [66057b27-661b-418f-beee-c3635131ec82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.006008756s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (67.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.070374521s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.07s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bjlw6" [db23aaee-2730-46c3-a3a5-fb550da014aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008425079s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ngmrr" [62e34c00-b66e-4375-9742-de30f47eea6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ngmrr" [62e34c00-b66e-4375-9742-de30f47eea6c] Running
E0328 04:37:34.742012 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/default-k8s-diff-port-697565/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005674127s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m29.657566729s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.66s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-487xc" [5b96ec7a-40f0-454b-be08-4d9cb3ccff09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-487xc" [5b96ec7a-40f0-454b-be08-4d9cb3ccff09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004221272s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0328 04:39:00.724141 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/no-preload-860536/client.crt: no such file or directory
E0328 04:39:21.204498 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/no-preload-860536/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.887329115s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.89s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w5ld7" [3359727a-c6f5-43a3-b48e-424b0132e53f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w5ld7" [3359727a-c6f5-43a3-b48e-424b0132e53f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005691238s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bjfdj" [54cc3681-730f-4589-a7c4-0f8bfb5f7838] Running
E0328 04:40:02.165012 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/no-preload-860536/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00439634s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
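
The ControllerPod steps wait for the CNI's own daemon pod to be healthy before probing the network. A roughly equivalent manual check (a sketch; label and namespace as shown in the log above):

	kubectl --context flannel-406050 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m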

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6bz9d" [db8e5e47-16ec-4d08-8e47-c9dd27aedbb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6bz9d" [db8e5e47-16ec-4d08-8e47-c9dd27aedbb3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004121518s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (93.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-406050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m33.234847313s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-406050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-406050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s7txk" [9d6a3518-9ad4-413a-8678-4933db6ffca3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s7txk" [9d6a3518-9ad4-413a-8678-4933db6ffca3] Running
E0328 04:41:47.182213 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/kindnet-406050/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003808169s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-406050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-406050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0328 04:41:53.254555 3255398 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-3249988/.minikube/profiles/auto-406050/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-513448 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-513448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-513448
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
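All three DNS-forwarding subtests above share one gate at functional_test_tunnel_test.go:99, combining an OS check with a driver check. A minimal sketch, assuming the driver name is passed in; the real helper may consult the profile config:

package functional

import (
	"runtime"
	"testing"
)

// skipUnlessDNSForwardingSupported is a hypothetical helper: DNS forwarding
// for minikube tunnel is only wired up for the Hyperkit driver on macOS.
func skipUnlessDNSForwardingSupported(t *testing.T, driver string) {
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}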

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
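Unlike the platform-gated skips above, this one is flag-driven: the gVisor addon test only runs when explicitly opted in. A minimal sketch, assuming a standard flag.Bool registration (hypothetical; the real flag is wired up by the minikube test harness):

package gvisor

import (
	"flag"
	"testing"
)

// gvisor is a hypothetical rendering of the suite's --gvisor opt-in flag.
var gvisor = flag.Bool("gvisor", false, "run tests that require the gVisor addon")

func maybeSkipGvisor(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
}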

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
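This gate combines a driver check with an environment-variable check: the none-driver user test needs the original sudo caller's name. A minimal sketch, assuming the driver name is passed in (hypothetical helper; the real check is at none_test.go:38):

package none

import (
	"os"
	"testing"
)

// skipUnlessNoneWithSudoUser mirrors the skip above: the test must run under
// the none driver, and SUDO_USER must carry the invoking user's name.
func skipUnlessNoneWithSudoUser(t *testing.T, driver string) {
	if driver != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}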

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-518682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-518682
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
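Note that even a skipped group still deletes the profile it reserved; the 0.17s recorded above is that cleanup. A minimal sketch of the pattern, assuming the binary path shown in the log (the helper name is illustrative, not the real helpers_test.go API):

package startstop

import (
	"os/exec"
	"testing"
)

// cleanupProfile mirrors the "Cleaning up ... profile" step above: the
// reserved profile is deleted whether or not the test body ran.
func cleanupProfile(t *testing.T, profile string) {
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}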

                                                
                                    
TestNetworkPlugins/group/kubenet (5.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
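Every probe in the debugLogs dump below fails the same way (as does the cilium dump later): the test skipped before ever running minikube start, so the kubenet-406050 profile and its kubeconfig context were never created. A hypothetical illustration of why two error shapes appear: probes that shell out to kubectl with --context report the missing context, while host-side probes that go through minikube report the missing profile.

package net

import "os/exec"

// probeContext is a hypothetical illustration of the kubectl-backed probes:
// with no kubeconfig context for the profile, kubectl prints
// "Error in configuration: context was not found for specified context: ..."
// (or `error: context "..." does not exist`), while minikube-backed probes
// print the `Profile "..." not found` hint instead.
func probeContext(profile string) ([]byte, error) {
	return exec.Command("kubectl", "--context", profile, "get", "nodes").CombinedOutput()
}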
----------------------- debugLogs start: kubenet-406050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-406050

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-406050

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/hosts:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/resolv.conf:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-406050

>>> host: crictl pods:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: crictl containers:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> k8s: describe netcat deployment:
error: context "kubenet-406050" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-406050" does not exist

>>> k8s: netcat logs:
error: context "kubenet-406050" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-406050" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-406050" does not exist

>>> k8s: coredns logs:
error: context "kubenet-406050" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-406050" does not exist

>>> k8s: api server logs:
error: context "kubenet-406050" does not exist

>>> host: /etc/cni:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: ip a s:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: ip r s:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: iptables-save:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: iptables table nat:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-406050" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-406050" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-406050" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: kubelet daemon config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> k8s: kubelet logs:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-406050

>>> host: docker daemon status:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: docker daemon config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: docker system info:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: cri-docker daemon status:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: cri-docker daemon config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: cri-dockerd version:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: containerd daemon status:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: containerd daemon config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: containerd config dump:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: crio daemon status:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: crio daemon config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: /etc/crio:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

>>> host: crio config:
* Profile "kubenet-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-406050"

----------------------- debugLogs end: kubenet-406050 [took: 4.919352475s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-406050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-406050
--- SKIP: TestNetworkPlugins/group/kubenet (5.16s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.22s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-406050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-406050

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-406050

>>> host: /etc/nsswitch.conf:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/hosts:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/resolv.conf:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-406050

>>> host: crictl pods:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: crictl containers:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> k8s: describe netcat deployment:
error: context "cilium-406050" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-406050" does not exist

>>> k8s: netcat logs:
error: context "cilium-406050" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-406050" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-406050" does not exist

>>> k8s: coredns logs:
error: context "cilium-406050" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-406050" does not exist

>>> k8s: api server logs:
error: context "cilium-406050" does not exist

>>> host: /etc/cni:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: ip a s:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: ip r s:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: iptables-save:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: iptables table nat:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-406050

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-406050

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-406050" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-406050" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-406050

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-406050

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-406050" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-406050" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-406050" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-406050" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-406050" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: kubelet daemon config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> k8s: kubelet logs:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-406050

>>> host: docker daemon status:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: docker daemon config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: docker system info:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: cri-docker daemon status:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: cri-docker daemon config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: cri-dockerd version:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: containerd daemon status:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: containerd daemon config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: containerd config dump:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: crio daemon status:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: crio daemon config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: /etc/crio:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

>>> host: crio config:
* Profile "cilium-406050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-406050"

----------------------- debugLogs end: cilium-406050 [took: 5.050373227s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-406050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-406050
--- SKIP: TestNetworkPlugins/group/cilium (5.22s)