Test Report: Docker_Linux_containerd_arm64 18213

d7784bd4e07917c4cb201a553088c10d6998a83a:2024-03-15:33580

Test fail (7/335)

TestAddons/parallel/Ingress (38.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-639618 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-639618 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-639618 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [209a4373-87f4-4fd6-8b04-e8c5094201f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [209a4373-87f4-4fd6-8b04-e8c5094201f9] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004388132s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-639618 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.064324337s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-639618 addons disable ingress-dns --alsologtostderr -v=1: (1.467316149s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-639618 addons disable ingress --alsologtostderr -v=1: (7.828026332s)
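
For context, the step that fails above (addons_test.go:297) is a plain DNS query: nslookup asks the ingress-dns server listening on the container's static IP, 192.168.49.2, to resolve hello-john.test. Below is a minimal Go sketch of the same probe; it is not the minikube test code, and the dial and lookup timeouts are assumptions, while the host name and server address come from the log above.

	// Sketch only: reproduce the DNS probe behind "nslookup hello-john.test 192.168.49.2".
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				// Send every lookup to the ingress-dns endpoint instead of /etc/resolv.conf.
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// This is the failure mode captured in the log: the server never answers.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}

When the addon is healthy the lookup returns an address; a timeout here corresponds to the ";; connection timed out; no servers could be reached" output captured above.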
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-639618
helpers_test.go:235: (dbg) docker inspect addons-639618:

-- stdout --
	[
	    {
	        "Id": "009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7",
	        "Created": "2024-03-15T07:01:40.602574146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3301834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-15T07:01:40.900447251Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7/hosts",
	        "LogPath": "/var/lib/docker/containers/009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7/009815b2cb973203de99923664d8056d90d8f29062090e7e6f10bb65c7e86ff7-json.log",
	        "Name": "/addons-639618",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-639618:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-639618",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7db55d33865f453d19a2921a67d39e5ae7f4bc9e40f046de097699e52c18f8b0-init/diff:/var/lib/docker/overlay2/81bfb75b66991fc99a81a39de84c7e82ece5b807050cd14d22a1050d39339cc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7db55d33865f453d19a2921a67d39e5ae7f4bc9e40f046de097699e52c18f8b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7db55d33865f453d19a2921a67d39e5ae7f4bc9e40f046de097699e52c18f8b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7db55d33865f453d19a2921a67d39e5ae7f4bc9e40f046de097699e52c18f8b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-639618",
	                "Source": "/var/lib/docker/volumes/addons-639618/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-639618",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-639618",
	                "name.minikube.sigs.k8s.io": "addons-639618",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c6a3de92d01967bf653839439f1290f81121096e40845c7fe781f09edb83d51",
	            "SandboxKey": "/var/run/docker/netns/0c6a3de92d01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36680"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36679"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36676"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36678"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36677"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-639618": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "009815b2cb97",
	                        "addons-639618"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "a1d47562b79303e2c8f4ed6a0590a9f466a716116686e129ec1286ba32ffc6d2",
	                    "EndpointID": "8ba1f953484ab991653b885a86bc04880ec856dfc7dccbc7c26f67f0f23bc033",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-639618",
	                        "009815b2cb97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
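
The NetworkSettings block above is also where the 192.168.49.2 address used by the failed nslookup comes from (Networks → addons-639618 → IPAddress). A minimal sketch of reading it back, assuming the docker CLI is on PATH and the container is still named addons-639618 as in this report; the template mirrors the --format queries minikube itself runs later in these logs:

	// Sketch only: extract the container's IP as reported by the inspect output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The range template avoids hard-coding the network name ("addons-639618").
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
			"addons-639618").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("container IP:", strings.TrimSpace(string(out))) // expected: 192.168.49.2
	}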
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-639618 -n addons-639618
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-639618 logs -n 25: (2.168517128s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-386280              | download-only-386280   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | -o=json --download-only              | download-only-348072   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | -p download-only-348072              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-348072              | download-only-348072   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | -o=json --download-only              | download-only-815376   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | -p download-only-815376              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-815376              | download-only-815376   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-386280              | download-only-386280   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-348072              | download-only-348072   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-815376              | download-only-815376   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | --download-only -p                   | download-docker-230205 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | download-docker-230205               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-230205            | download-docker-230205 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | --download-only -p                   | binary-mirror-391586   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | binary-mirror-391586                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34147               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-391586              | binary-mirror-391586   | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| addons  | disable dashboard -p                 | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | addons-639618                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | addons-639618                        |                        |         |         |                     |                     |
	| start   | -p addons-639618 --wait=true         | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:03 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-639618 ip                     | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	| addons  | addons-639618 addons disable         | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-639618 addons                 | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	|         | addons-639618                        |                        |         |         |                     |                     |
	| ssh     | addons-639618 ssh curl -s            | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-639618 ip                     | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:03 UTC | 15 Mar 24 07:03 UTC |
	| addons  | addons-639618 addons disable         | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-639618 addons disable         | addons-639618          | jenkins | v1.32.0 | 15 Mar 24 07:04 UTC | 15 Mar 24 07:04 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:01:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:01:15.966336 3301372 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:01:15.966540 3301372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:15.966553 3301372 out.go:304] Setting ErrFile to fd 2...
	I0315 07:01:15.966558 3301372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:15.966842 3301372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:01:15.967328 3301372 out.go:298] Setting JSON to false
	I0315 07:01:15.968258 3301372 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":56620,"bootTime":1710429456,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:01:15.968376 3301372 start.go:139] virtualization:  
	I0315 07:01:15.971465 3301372 out.go:177] * [addons-639618] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:01:15.973678 3301372 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:01:15.975715 3301372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:01:15.973826 3301372 notify.go:220] Checking for updates...
	I0315 07:01:15.980218 3301372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:01:15.982474 3301372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:01:15.985088 3301372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:01:15.987202 3301372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:01:15.989903 3301372 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:01:16.016004 3301372 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:01:16.016116 3301372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:16.071586 3301372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-15 07:01:16.062450452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:16.071694 3301372 docker.go:295] overlay module found
	I0315 07:01:16.074108 3301372 out.go:177] * Using the docker driver based on user configuration
	I0315 07:01:16.076113 3301372 start.go:297] selected driver: docker
	I0315 07:01:16.076136 3301372 start.go:901] validating driver "docker" against <nil>
	I0315 07:01:16.076152 3301372 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:01:16.076828 3301372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:16.131597 3301372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-15 07:01:16.122788276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:16.131776 3301372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:01:16.132013 3301372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:01:16.134137 3301372 out.go:177] * Using Docker driver with root privileges
	I0315 07:01:16.136223 3301372 cni.go:84] Creating CNI manager for ""
	I0315 07:01:16.136248 3301372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:01:16.136262 3301372 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 07:01:16.136356 3301372 start.go:340] cluster config:
	{Name:addons-639618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-639618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:01:16.139191 3301372 out.go:177] * Starting "addons-639618" primary control-plane node in "addons-639618" cluster
	I0315 07:01:16.141284 3301372 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:01:16.143209 3301372 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:01:16.145414 3301372 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:01:16.145444 3301372 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:01:16.145461 3301372 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0315 07:01:16.145474 3301372 cache.go:56] Caching tarball of preloaded images
	I0315 07:01:16.145563 3301372 preload.go:173] Found /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0315 07:01:16.145572 3301372 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0315 07:01:16.145946 3301372 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/config.json ...
	I0315 07:01:16.145978 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/config.json: {Name:mk00992fac25846b4a41327043ae316a46dc6bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:16.159684 3301372 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0315 07:01:16.159823 3301372 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0315 07:01:16.159851 3301372 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0315 07:01:16.159857 3301372 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0315 07:01:16.159868 3301372 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0315 07:01:16.159878 3301372 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from local cache
	I0315 07:01:32.455883 3301372 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from cached tarball
	I0315 07:01:32.455932 3301372 cache.go:194] Successfully downloaded all kic artifacts
	I0315 07:01:32.455961 3301372 start.go:360] acquireMachinesLock for addons-639618: {Name:mk108dfa992b2d0855d4b3edeb1c16b8f6a38d32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:01:32.456735 3301372 start.go:364] duration metric: took 751.712µs to acquireMachinesLock for "addons-639618"
	I0315 07:01:32.456783 3301372 start.go:93] Provisioning new machine with config: &{Name:addons-639618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-639618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0315 07:01:32.456861 3301372 start.go:125] createHost starting for "" (driver="docker")
	I0315 07:01:32.459519 3301372 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0315 07:01:32.459744 3301372 start.go:159] libmachine.API.Create for "addons-639618" (driver="docker")
	I0315 07:01:32.459778 3301372 client.go:168] LocalClient.Create starting
	I0315 07:01:32.459921 3301372 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem
	I0315 07:01:32.724830 3301372 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem
	I0315 07:01:33.551670 3301372 cli_runner.go:164] Run: docker network inspect addons-639618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0315 07:01:33.565990 3301372 cli_runner.go:211] docker network inspect addons-639618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0315 07:01:33.566067 3301372 network_create.go:281] running [docker network inspect addons-639618] to gather additional debugging logs...
	I0315 07:01:33.566087 3301372 cli_runner.go:164] Run: docker network inspect addons-639618
	W0315 07:01:33.581334 3301372 cli_runner.go:211] docker network inspect addons-639618 returned with exit code 1
	I0315 07:01:33.581366 3301372 network_create.go:284] error running [docker network inspect addons-639618]: docker network inspect addons-639618: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-639618 not found
	I0315 07:01:33.581380 3301372 network_create.go:286] output of [docker network inspect addons-639618]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-639618 not found
	
	** /stderr **
	I0315 07:01:33.581488 3301372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 07:01:33.599290 3301372 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000534290}
	I0315 07:01:33.599333 3301372 network_create.go:124] attempt to create docker network addons-639618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0315 07:01:33.599389 3301372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-639618 addons-639618
	I0315 07:01:33.659034 3301372 network_create.go:108] docker network addons-639618 192.168.49.0/24 created
	I0315 07:01:33.659151 3301372 kic.go:121] calculated static IP "192.168.49.2" for the "addons-639618" container
	I0315 07:01:33.659229 3301372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0315 07:01:33.673120 3301372 cli_runner.go:164] Run: docker volume create addons-639618 --label name.minikube.sigs.k8s.io=addons-639618 --label created_by.minikube.sigs.k8s.io=true
	I0315 07:01:33.689086 3301372 oci.go:103] Successfully created a docker volume addons-639618
	I0315 07:01:33.689175 3301372 cli_runner.go:164] Run: docker run --rm --name addons-639618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-639618 --entrypoint /usr/bin/test -v addons-639618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0315 07:01:36.355729 3301372 cli_runner.go:217] Completed: docker run --rm --name addons-639618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-639618 --entrypoint /usr/bin/test -v addons-639618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib: (2.666506432s)
	I0315 07:01:36.355764 3301372 oci.go:107] Successfully prepared a docker volume addons-639618
	I0315 07:01:36.355789 3301372 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:01:36.355808 3301372 kic.go:194] Starting extracting preloaded images to volume ...
	I0315 07:01:36.355923 3301372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-639618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0315 07:01:40.537045 3301372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-639618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (4.181069776s)
	I0315 07:01:40.537079 3301372 kic.go:203] duration metric: took 4.181267424s to extract preloaded images to volume ...
	W0315 07:01:40.537217 3301372 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0315 07:01:40.537324 3301372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0315 07:01:40.588990 3301372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-639618 --name addons-639618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-639618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-639618 --network addons-639618 --ip 192.168.49.2 --volume addons-639618:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0315 07:01:40.909540 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Running}}
	I0315 07:01:40.935343 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:01:40.963032 3301372 cli_runner.go:164] Run: docker exec addons-639618 stat /var/lib/dpkg/alternatives/iptables
	I0315 07:01:41.034342 3301372 oci.go:144] the created container "addons-639618" has a running status.
	I0315 07:01:41.034377 3301372 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa...
	I0315 07:01:41.593941 3301372 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0315 07:01:41.632320 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:01:41.653698 3301372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0315 07:01:41.653719 3301372 kic_runner.go:114] Args: [docker exec --privileged addons-639618 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0315 07:01:41.722507 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:01:41.754038 3301372 machine.go:94] provisionDockerMachine start ...
	I0315 07:01:41.754130 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:41.774908 3301372 main.go:141] libmachine: Using SSH client type: native
	I0315 07:01:41.775275 3301372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36680 <nil> <nil>}
	I0315 07:01:41.775288 3301372 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:01:41.926562 3301372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-639618
	
	I0315 07:01:41.926635 3301372 ubuntu.go:169] provisioning hostname "addons-639618"
	I0315 07:01:41.926739 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:41.954816 3301372 main.go:141] libmachine: Using SSH client type: native
	I0315 07:01:41.955056 3301372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36680 <nil> <nil>}
	I0315 07:01:41.955068 3301372 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-639618 && echo "addons-639618" | sudo tee /etc/hostname
	I0315 07:01:42.142152 3301372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-639618
	
	I0315 07:01:42.142330 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:42.165785 3301372 main.go:141] libmachine: Using SSH client type: native
	I0315 07:01:42.166092 3301372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36680 <nil> <nil>}
	I0315 07:01:42.166111 3301372 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-639618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-639618/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-639618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:01:42.323864 3301372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:01:42.323933 3301372 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18213-3295134/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-3295134/.minikube}
	I0315 07:01:42.323968 3301372 ubuntu.go:177] setting up certificates
	I0315 07:01:42.323979 3301372 provision.go:84] configureAuth start
	I0315 07:01:42.324048 3301372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-639618
	I0315 07:01:42.341990 3301372 provision.go:143] copyHostCerts
	I0315 07:01:42.342078 3301372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem (1078 bytes)
	I0315 07:01:42.342205 3301372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem (1123 bytes)
	I0315 07:01:42.342265 3301372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem (1679 bytes)
	I0315 07:01:42.342313 3301372 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem org=jenkins.addons-639618 san=[127.0.0.1 192.168.49.2 addons-639618 localhost minikube]
	I0315 07:01:42.952926 3301372 provision.go:177] copyRemoteCerts
	I0315 07:01:42.953011 3301372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:01:42.953054 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:42.968451 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:01:43.068261 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:01:43.092957 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 07:01:43.118133 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:01:43.142866 3301372 provision.go:87] duration metric: took 818.872309ms to configureAuth
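For reference, the server.pem copied to /etc/docker above carries the SAN list chosen at generation time (127.0.0.1 192.168.49.2 addons-639618 localhost minikube). A quick way to confirm what actually landed in the cert, assuming openssl is available inside the kicbase container (an assumption; the log does not show it), is:

	docker exec addons-639618 openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'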
	I0315 07:01:43.142895 3301372 ubuntu.go:193] setting minikube options for container-runtime
	I0315 07:01:43.143146 3301372 config.go:182] Loaded profile config "addons-639618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:01:43.143159 3301372 machine.go:97] duration metric: took 1.389104241s to provisionDockerMachine
	I0315 07:01:43.143167 3301372 client.go:171] duration metric: took 10.683380959s to LocalClient.Create
	I0315 07:01:43.143182 3301372 start.go:167] duration metric: took 10.68343873s to libmachine.API.Create "addons-639618"
	I0315 07:01:43.143189 3301372 start.go:293] postStartSetup for "addons-639618" (driver="docker")
	I0315 07:01:43.143198 3301372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:01:43.143257 3301372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:01:43.143297 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:43.158719 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:01:43.256013 3301372 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:01:43.258969 3301372 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 07:01:43.259092 3301372 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 07:01:43.259132 3301372 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 07:01:43.259145 3301372 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0315 07:01:43.259156 3301372 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/addons for local assets ...
	I0315 07:01:43.259222 3301372 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/files for local assets ...
	I0315 07:01:43.259250 3301372 start.go:296] duration metric: took 116.055039ms for postStartSetup
	I0315 07:01:43.259556 3301372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-639618
	I0315 07:01:43.274967 3301372 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/config.json ...
	I0315 07:01:43.275335 3301372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:01:43.275384 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:43.291113 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:01:43.384691 3301372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 07:01:43.389485 3301372 start.go:128] duration metric: took 10.932605641s to createHost
	I0315 07:01:43.389510 3301372 start.go:83] releasing machines lock for "addons-639618", held for 10.93275895s
	I0315 07:01:43.389582 3301372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-639618
	I0315 07:01:43.405509 3301372 ssh_runner.go:195] Run: cat /version.json
	I0315 07:01:43.405568 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:43.405818 3301372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:01:43.405883 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:01:43.426911 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:01:43.426959 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:01:43.657096 3301372 ssh_runner.go:195] Run: systemctl --version
	I0315 07:01:43.661472 3301372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 07:01:43.665765 3301372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0315 07:01:43.691346 3301372 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0315 07:01:43.691425 3301372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:01:43.721340 3301372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
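Renaming to *.mk_disabled is all it takes to park a CNI config: libcni only loads files ending in .conf, .conflist, or .json from the conf dir, so the stock podman/crio bridge configs stay on disk (a reverse mv restores them) without competing with the kindnet config installed later.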
	I0315 07:01:43.721370 3301372 start.go:494] detecting cgroup driver to use...
	I0315 07:01:43.721404 3301372 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 07:01:43.721453 3301372 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0315 07:01:43.733852 3301372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 07:01:43.745402 3301372 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:01:43.745488 3301372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:01:43.759789 3301372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:01:43.774429 3301372 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:01:43.857203 3301372 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:01:43.944246 3301372 docker.go:233] disabling docker service ...
	I0315 07:01:43.944369 3301372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:01:43.965042 3301372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:01:43.976896 3301372 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:01:44.064940 3301372 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:01:44.158856 3301372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:01:44.171563 3301372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:01:44.190273 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0315 07:01:44.200417 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 07:01:44.210413 3301372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 07:01:44.210521 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 07:01:44.220804 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:01:44.230546 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 07:01:44.240580 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:01:44.250375 3301372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:01:44.259283 3301372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 07:01:44.269421 3301372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:01:44.278297 3301372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:01:44.286866 3301372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:01:44.367868 3301372 ssh_runner.go:195] Run: sudo systemctl restart containerd
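The sed series above edits /etc/containerd/config.toml in place before this restart. A spot-check of the values those edits should leave behind (field locations assumed from the containerd 1.6 CRI plugin layout) would be:

	docker exec addons-639618 grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
	# expected, given the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false        (cgroupfs was detected on the host, so systemd cgroups stay off)
	#   conf_dir = "/etc/cni/net.d"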
	I0315 07:01:44.494854 3301372 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0315 07:01:44.494936 3301372 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0315 07:01:44.498638 3301372 start.go:562] Will wait 60s for crictl version
	I0315 07:01:44.498716 3301372 ssh_runner.go:195] Run: which crictl
	I0315 07:01:44.502150 3301372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:01:44.537215 3301372 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
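crictl resolves that endpoint from the /etc/crictl.yaml written a few lines earlier; without the file it would fall back to probing a list of legacy default endpoints, so writing it first keeps the version probe deterministic.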
	I0315 07:01:44.537295 3301372 ssh_runner.go:195] Run: containerd --version
	I0315 07:01:44.559096 3301372 ssh_runner.go:195] Run: containerd --version
	I0315 07:01:44.583208 3301372 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0315 07:01:44.585621 3301372 cli_runner.go:164] Run: docker network inspect addons-639618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 07:01:44.600281 3301372 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0315 07:01:44.603858 3301372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
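The grep-to-tempfile-then-sudo-cp shape of that command is deliberate: appending with echo ... | sudo tee -a /etc/hosts would stack a duplicate entry on every start, and a plain sudo grep ... > /etc/hosts would not work anyway, because the redirection is performed by the unprivileged calling shell rather than by sudo.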
	I0315 07:01:44.614244 3301372 kubeadm.go:877] updating cluster {Name:addons-639618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-639618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:01:44.614366 3301372 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:01:44.614426 3301372 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:01:44.655985 3301372 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:01:44.656008 3301372 containerd.go:519] Images already preloaded, skipping extraction
	I0315 07:01:44.656069 3301372 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:01:44.693572 3301372 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:01:44.693597 3301372 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:01:44.693607 3301372 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0315 07:01:44.693704 3301372 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-639618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-639618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:01:44.693779 3301372 ssh_runner.go:195] Run: sudo crictl info
	I0315 07:01:44.729214 3301372 cni.go:84] Creating CNI manager for ""
	I0315 07:01:44.729241 3301372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:01:44.729252 3301372 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:01:44.729275 3301372 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-639618 NodeName:addons-639618 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:01:44.729406 3301372 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-639618"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
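The kubeadm.yaml rendered above is shipped to the node verbatim (the scp to /var/tmp/minikube/kubeadm.yaml.new below). To vet a config of this shape by hand, kubeadm init accepts --dry-run, which prints what it would do without committing changes to the host:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run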
	I0315 07:01:44.729484 3301372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:01:44.738480 3301372 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:01:44.738560 3301372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:01:44.747457 3301372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0315 07:01:44.765739 3301372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:01:44.784577 3301372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0315 07:01:44.802491 3301372 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0315 07:01:44.805931 3301372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:01:44.816958 3301372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:01:44.912699 3301372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:01:44.928357 3301372 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618 for IP: 192.168.49.2
	I0315 07:01:44.928420 3301372 certs.go:194] generating shared ca certs ...
	I0315 07:01:44.928450 3301372 certs.go:226] acquiring lock for ca certs: {Name:mk9abb58e338d3f021292a49b0c7ea22df42932a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:44.928634 3301372 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key
	I0315 07:01:45.229156 3301372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt ...
	I0315 07:01:45.229199 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt: {Name:mk6487086aab5b1406e4961d314c3b6bf2479fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:45.229810 3301372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key ...
	I0315 07:01:45.229835 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key: {Name:mk2ff727dc4b5198e9e9cec510a8d2566b6e3406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:45.231276 3301372 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key
	I0315 07:01:45.969088 3301372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.crt ...
	I0315 07:01:45.970113 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.crt: {Name:mkad3910b53670f6bb1b9f18ddc525eb29137218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:45.970318 3301372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key ...
	I0315 07:01:45.970327 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key: {Name:mkb9f8a1dea7a31526a3e5acec3e0ad361601fc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:45.970415 3301372 certs.go:256] generating profile certs ...
	I0315 07:01:45.970469 3301372 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.key
	I0315 07:01:45.970481 3301372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt with IP's: []
	I0315 07:01:46.259542 3301372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt ...
	I0315 07:01:46.259577 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: {Name:mk6583d43a4dd1ce8b862141f54942b9f809088c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:46.260138 3301372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.key ...
	I0315 07:01:46.260157 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.key: {Name:mkdaeb090e75ee31732a2041145178a84c7dd311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:46.260313 3301372 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key.19025e21
	I0315 07:01:46.260340 3301372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt.19025e21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0315 07:01:47.314327 3301372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt.19025e21 ...
	I0315 07:01:47.314361 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt.19025e21: {Name:mk4be64587041c59bf9ae010dfc4655421bd039f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:47.314584 3301372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key.19025e21 ...
	I0315 07:01:47.314600 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key.19025e21: {Name:mkf0992dc8b4cb9d214f0703e670e79ca6fd7dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:47.314698 3301372 certs.go:381] copying /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt.19025e21 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt
	I0315 07:01:47.314789 3301372 certs.go:385] copying /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key.19025e21 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key
	I0315 07:01:47.314847 3301372 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.key
	I0315 07:01:47.314868 3301372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.crt with IP's: []
	I0315 07:01:47.798268 3301372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.crt ...
	I0315 07:01:47.798304 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.crt: {Name:mke2c4dffa92a2e3276658b2920192b972762708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:47.798902 3301372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.key ...
	I0315 07:01:47.798929 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.key: {Name:mk507271df2378622df217807c7980d34ca10f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:01:47.799528 3301372 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 07:01:47.799577 3301372 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:01:47.799609 3301372 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:01:47.799637 3301372 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem (1679 bytes)
	I0315 07:01:47.800227 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:01:47.825058 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0315 07:01:47.850209 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:01:47.877440 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:01:47.904603 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0315 07:01:47.928930 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:01:47.952963 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:01:47.978044 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 07:01:48.004962 3301372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:01:48.035356 3301372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:01:48.056896 3301372 ssh_runner.go:195] Run: openssl version
	I0315 07:01:48.063494 3301372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:01:48.074227 3301372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:01:48.078251 3301372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 07:01 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:01:48.078333 3301372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:01:48.085688 3301372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
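The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, i.e. the value printed by the openssl x509 -hash -noout run just above, and <hash>.0 symlinks under /etc/ssl/certs are how OpenSSL-linked tools locate trusted CAs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink created here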
	I0315 07:01:48.096023 3301372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:01:48.099590 3301372 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:01:48.099643 3301372 kubeadm.go:391] StartCluster: {Name:addons-639618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-639618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:01:48.099724 3301372 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0315 07:01:48.099787 3301372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:01:48.137781 3301372 cri.go:89] found id: ""
	I0315 07:01:48.137902 3301372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:01:48.147483 3301372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:01:48.156992 3301372 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0315 07:01:48.157086 3301372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:01:48.166715 3301372 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:01:48.166734 3301372 kubeadm.go:156] found existing configuration files:
	
	I0315 07:01:48.166808 3301372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:01:48.176268 3301372 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:01:48.176334 3301372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:01:48.184805 3301372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:01:48.193823 3301372 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:01:48.193912 3301372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:01:48.203009 3301372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:01:48.212106 3301372 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:01:48.212195 3301372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:01:48.221146 3301372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:01:48.230087 3301372 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:01:48.230156 3301372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 07:01:48.238640 3301372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0315 07:01:48.284879 3301372 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:01:48.285209 3301372 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:01:48.329954 3301372 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0315 07:01:48.330073 3301372 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0315 07:01:48.330143 3301372 kubeadm.go:309] OS: Linux
	I0315 07:01:48.330213 3301372 kubeadm.go:309] CGROUPS_CPU: enabled
	I0315 07:01:48.330291 3301372 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0315 07:01:48.330366 3301372 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0315 07:01:48.330441 3301372 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0315 07:01:48.330514 3301372 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0315 07:01:48.330592 3301372 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0315 07:01:48.330669 3301372 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0315 07:01:48.330748 3301372 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0315 07:01:48.330825 3301372 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0315 07:01:48.404588 3301372 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:01:48.404759 3301372 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:01:48.404896 3301372 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:01:48.628328 3301372 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 07:01:48.633051 3301372 out.go:204]   - Generating certificates and keys ...
	I0315 07:01:48.633264 3301372 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:01:48.633387 3301372 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:01:49.168577 3301372 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:01:49.938568 3301372 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:01:50.471373 3301372 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:01:50.844903 3301372 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:01:52.475531 3301372 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:01:52.475975 3301372 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-639618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0315 07:01:52.977412 3301372 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:01:52.977696 3301372 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-639618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0315 07:01:53.415842 3301372 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:01:53.836787 3301372 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:01:54.400018 3301372 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:01:54.400297 3301372 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:01:54.641137 3301372 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:01:55.164763 3301372 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:01:55.656885 3301372 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:01:56.005365 3301372 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:01:56.005475 3301372 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:01:56.008336 3301372 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 07:01:56.010964 3301372 out.go:204]   - Booting up control plane ...
	I0315 07:01:56.011118 3301372 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:01:56.011604 3301372 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:01:56.012879 3301372 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:01:56.023562 3301372 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:01:56.025420 3301372 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:01:56.025669 3301372 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:01:56.128383 3301372 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 07:02:03.630553 3301372 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502200 seconds
	I0315 07:02:03.630678 3301372 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:02:03.646559 3301372 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:02:04.177553 3301372 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:02:04.177748 3301372 kubeadm.go:309] [mark-control-plane] Marking the node addons-639618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:02:04.690837 3301372 kubeadm.go:309] [bootstrap-token] Using token: qwrbi1.97owmuh4y9tjym5m
	I0315 07:02:04.693652 3301372 out.go:204]   - Configuring RBAC rules ...
	I0315 07:02:04.693771 3301372 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:02:04.701719 3301372 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:02:04.710981 3301372 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:02:04.715412 3301372 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:02:04.721116 3301372 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:02:04.725369 3301372 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:02:04.740605 3301372 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:02:04.989833 3301372 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:02:05.111880 3301372 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:02:05.114508 3301372 kubeadm.go:309] 
	I0315 07:02:05.114585 3301372 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:02:05.114593 3301372 kubeadm.go:309] 
	I0315 07:02:05.114667 3301372 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:02:05.114672 3301372 kubeadm.go:309] 
	I0315 07:02:05.114697 3301372 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:02:05.114754 3301372 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:02:05.114802 3301372 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:02:05.114807 3301372 kubeadm.go:309] 
	I0315 07:02:05.114859 3301372 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:02:05.114864 3301372 kubeadm.go:309] 
	I0315 07:02:05.114909 3301372 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:02:05.114921 3301372 kubeadm.go:309] 
	I0315 07:02:05.114972 3301372 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:02:05.115044 3301372 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:02:05.115152 3301372 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:02:05.115159 3301372 kubeadm.go:309] 
	I0315 07:02:05.115397 3301372 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:02:05.115478 3301372 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:02:05.115483 3301372 kubeadm.go:309] 
	I0315 07:02:05.115563 3301372 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qwrbi1.97owmuh4y9tjym5m \
	I0315 07:02:05.115662 3301372 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c1e97d56565bc0beab8ad4377b38bf3319ec6c746cc5fae6ed0032cea307c48a \
	I0315 07:02:05.115682 3301372 kubeadm.go:309] 	--control-plane 
	I0315 07:02:05.115686 3301372 kubeadm.go:309] 
	I0315 07:02:05.115767 3301372 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:02:05.115775 3301372 kubeadm.go:309] 
	I0315 07:02:05.115856 3301372 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qwrbi1.97owmuh4y9tjym5m \
	I0315 07:02:05.115962 3301372 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c1e97d56565bc0beab8ad4377b38bf3319ec6c746cc5fae6ed0032cea307c48a 
	I0315 07:02:05.119785 3301372 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0315 07:02:05.119900 3301372 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
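The --discovery-token-ca-cert-hash in those join commands is a SHA-256 over the cluster CA's public key; a joining node can recompute it from ca.crt to pin the CA. The standard kubeadm-docs recipe, with the path adjusted for minikube's certificatesDir of /var/lib/minikube/certs, is:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'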
	I0315 07:02:05.119928 3301372 cni.go:84] Creating CNI manager for ""
	I0315 07:02:05.119937 3301372 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:02:05.122849 3301372 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 07:02:05.125102 3301372 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 07:02:05.130289 3301372 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 07:02:05.130309 3301372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 07:02:05.156123 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 07:02:06.137002 3301372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:02:06.137153 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:06.137266 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-639618 minikube.k8s.io/updated_at=2024_03_15T07_02_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=addons-639618 minikube.k8s.io/primary=true
	I0315 07:02:06.338153 3301372 ops.go:34] apiserver oom_adj: -16
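An oom_adj of -16 is the value minikube expects here: it is the legacy /proc view of the oom_score_adj of roughly -997 that kubelet assigns to critical control-plane pods, meaning the kube-apiserver is among the last processes the kernel OOM killer will pick.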
	I0315 07:02:06.338254 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:06.838927 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:07.338882 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:07.839333 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:08.338789 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:08.839131 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:09.339354 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:09.838424 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:10.338625 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:10.838778 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:11.339228 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:11.839053 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:12.338883 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:12.838923 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:13.338611 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:13.838431 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:14.339033 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:14.838403 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:15.339247 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:15.838589 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:16.339103 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:16.838997 3301372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:02:16.937227 3301372 kubeadm.go:1107] duration metric: took 10.8001347s to wait for elevateKubeSystemPrivileges
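The run of identical kubectl get sa default calls above is a poll: the "default" ServiceAccount only appears once kube-controller-manager's serviceaccount controller is up, so minikube retries on a roughly 500ms tick (visible in the timestamps) until the call succeeds. A shell sketch of the equivalent loop, run on the node:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done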
	W0315 07:02:16.937262 3301372 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 07:02:16.937270 3301372 kubeadm.go:393] duration metric: took 28.837632553s to StartCluster
	I0315 07:02:16.937285 3301372 settings.go:142] acquiring lock: {Name:mk9341f71218475f44486dc55acbce7236fa3a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:16.937880 3301372 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:02:16.938259 3301372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/kubeconfig: {Name:mka8a8bb165c8233f51f8705aa64be6997cc72a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:02:16.938449 3301372 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0315 07:02:16.940495 3301372 out.go:177] * Verifying Kubernetes components...
	I0315 07:02:16.938536 3301372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 07:02:16.938696 3301372 config.go:182] Loaded profile config "addons-639618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:02:16.938713 3301372 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0315 07:02:16.942608 3301372 addons.go:69] Setting yakd=true in profile "addons-639618"
	I0315 07:02:16.942614 3301372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:02:16.942629 3301372 addons.go:234] Setting addon yakd=true in "addons-639618"
	I0315 07:02:16.942657 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.942717 3301372 addons.go:69] Setting ingress-dns=true in profile "addons-639618"
	I0315 07:02:16.942735 3301372 addons.go:234] Setting addon ingress-dns=true in "addons-639618"
	I0315 07:02:16.942767 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.943195 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.943318 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.943873 3301372 addons.go:69] Setting inspektor-gadget=true in profile "addons-639618"
	I0315 07:02:16.943907 3301372 addons.go:234] Setting addon inspektor-gadget=true in "addons-639618"
	I0315 07:02:16.943946 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.944358 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.945835 3301372 addons.go:69] Setting cloud-spanner=true in profile "addons-639618"
	I0315 07:02:16.945877 3301372 addons.go:234] Setting addon cloud-spanner=true in "addons-639618"
	I0315 07:02:16.945912 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.946325 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.948446 3301372 addons.go:69] Setting metrics-server=true in profile "addons-639618"
	I0315 07:02:16.948482 3301372 addons.go:234] Setting addon metrics-server=true in "addons-639618"
	I0315 07:02:16.948524 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.948958 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.951369 3301372 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-639618"
	I0315 07:02:16.951436 3301372 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-639618"
	I0315 07:02:16.951466 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.951856 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.957018 3301372 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-639618"
	I0315 07:02:16.957066 3301372 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-639618"
	I0315 07:02:16.957102 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.957535 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.959206 3301372 addons.go:69] Setting default-storageclass=true in profile "addons-639618"
	I0315 07:02:16.959244 3301372 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-639618"
	I0315 07:02:16.959701 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.969219 3301372 addons.go:69] Setting registry=true in profile "addons-639618"
	I0315 07:02:16.969265 3301372 addons.go:234] Setting addon registry=true in "addons-639618"
	I0315 07:02:16.969303 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.969844 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.978318 3301372 addons.go:69] Setting gcp-auth=true in profile "addons-639618"
	I0315 07:02:16.978493 3301372 mustload.go:65] Loading cluster: addons-639618
	I0315 07:02:16.978678 3301372 config.go:182] Loaded profile config "addons-639618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:02:16.978918 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.983165 3301372 addons.go:69] Setting storage-provisioner=true in profile "addons-639618"
	I0315 07:02:16.983316 3301372 addons.go:234] Setting addon storage-provisioner=true in "addons-639618"
	I0315 07:02:16.983487 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:16.988526 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.998574 3301372 addons.go:69] Setting ingress=true in profile "addons-639618"
	I0315 07:02:16.998621 3301372 addons.go:234] Setting addon ingress=true in "addons-639618"
	I0315 07:02:16.998665 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:17.007832 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.998574 3301372 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-639618"
	I0315 07:02:17.048454 3301372 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-639618"
	I0315 07:02:17.048876 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:16.998589 3301372 addons.go:69] Setting volumesnapshots=true in profile "addons-639618"
	I0315 07:02:17.054600 3301372 addons.go:234] Setting addon volumesnapshots=true in "addons-639618"
	I0315 07:02:17.079993 3301372 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0315 07:02:17.107580 3301372 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 07:02:17.107601 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0315 07:02:17.107661 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.124755 3301372 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0315 07:02:17.079905 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:17.107385 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:17.126721 3301372 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0315 07:02:17.126777 3301372 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:02:17.126784 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0315 07:02:17.127244 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:17.128539 3301372 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0315 07:02:17.134251 3301372 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0315 07:02:17.134276 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0315 07:02:17.134431 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.141418 3301372 addons.go:234] Setting addon default-storageclass=true in "addons-639618"
	I0315 07:02:17.141514 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:17.142018 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:17.147178 3301372 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0315 07:02:17.149565 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0315 07:02:17.149588 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0315 07:02:17.149660 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.180515 3301372 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0315 07:02:17.183694 3301372 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 07:02:17.183758 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0315 07:02:17.183859 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.173698 3301372 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:02:17.173729 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:02:17.191646 3301372 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:02:17.191672 3301372 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0315 07:02:17.191735 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.195674 3301372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 07:02:17.204956 3301372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 07:02:17.207153 3301372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0315 07:02:17.204875 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:02:17.204889 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0315 07:02:17.234036 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0315 07:02:17.240378 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0315 07:02:17.234293 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.234538 3301372 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 07:02:17.234585 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.258019 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0315 07:02:17.257931 3301372 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-639618"
	I0315 07:02:17.279732 3301372 out.go:177]   - Using image docker.io/registry:2.8.3
	I0315 07:02:17.282016 3301372 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0315 07:02:17.263450 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:17.263469 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0315 07:02:17.286341 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.311356 3301372 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0315 07:02:17.311379 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0315 07:02:17.311443 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.323930 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.333928 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:17.339407 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0315 07:02:17.351499 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0315 07:02:17.354459 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0315 07:02:17.367233 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0315 07:02:17.369704 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0315 07:02:17.369727 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0315 07:02:17.369799 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.372671 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.393614 3301372 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0315 07:02:17.383361 3301372 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:02:17.383416 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.390105 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.399307 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0315 07:02:17.399346 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0315 07:02:17.399417 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.400893 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:02:17.400965 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.431311 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.494923 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.505641 3301372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 07:02:17.505868 3301372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:02:17.506848 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.535215 3301372 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0315 07:02:17.543227 3301372 out.go:177]   - Using image docker.io/busybox:stable
	I0315 07:02:17.545631 3301372 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 07:02:17.545652 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0315 07:02:17.545720 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:17.558315 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.561386 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.580708 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.600557 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.611416 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:17.616494 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
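Note: the cli_runner/sshutil pairs above all follow one pattern: ask Docker which host port it published for the container's 22/tcp endpoint (via the Go template shown in the log), then open an SSH session to 127.0.0.1 on that port with the machine's key. Below is a minimal Go sketch of that lookup, assuming the docker CLI is on PATH; the helper name findSSHPort is illustrative, not minikube's API.

// findsshport.go - sketch of the port lookup visible in the cli_runner lines
// above: resolve the host port mapped to the container's 22/tcp, then SSH to
// 127.0.0.1:<port>. Names and error handling are illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findSSHPort returns the host port Docker published for the container's
// 22/tcp endpoint, using the same Go template seen in the log.
func findSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := findSSHPort("addons-639618")
	if err != nil {
		panic(err)
	}
	// The sshutil.go lines then dial this endpoint with the machine key.
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
}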
	I0315 07:02:18.011407 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 07:02:18.113733 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0315 07:02:18.113807 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0315 07:02:18.125136 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:02:18.178760 3301372 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0315 07:02:18.178834 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0315 07:02:18.181195 3301372 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0315 07:02:18.181265 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0315 07:02:18.184217 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 07:02:18.187879 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0315 07:02:18.236150 3301372 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:02:18.236220 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0315 07:02:18.266327 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 07:02:18.270600 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0315 07:02:18.270669 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0315 07:02:18.281560 3301372 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0315 07:02:18.281630 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0315 07:02:18.297561 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:02:18.300823 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 07:02:18.310346 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0315 07:02:18.310385 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0315 07:02:18.372706 3301372 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0315 07:02:18.372727 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0315 07:02:18.416422 3301372 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0315 07:02:18.416496 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0315 07:02:18.435534 3301372 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0315 07:02:18.435605 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0315 07:02:18.450847 3301372 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:02:18.450921 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 07:02:18.515495 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0315 07:02:18.515557 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0315 07:02:18.519281 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0315 07:02:18.519350 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0315 07:02:18.567852 3301372 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0315 07:02:18.567951 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0315 07:02:18.668880 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0315 07:02:18.676887 3301372 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0315 07:02:18.676960 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0315 07:02:18.696689 3301372 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:02:18.696760 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:02:18.711334 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0315 07:02:18.711418 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0315 07:02:18.768063 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0315 07:02:18.768134 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0315 07:02:18.866047 3301372 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0315 07:02:18.866108 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0315 07:02:18.925862 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0315 07:02:18.925935 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0315 07:02:19.004049 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0315 07:02:19.004140 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0315 07:02:19.013286 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0315 07:02:19.013362 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0315 07:02:19.017297 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:02:19.183478 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0315 07:02:19.183560 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0315 07:02:19.195124 3301372 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0315 07:02:19.195196 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0315 07:02:19.216818 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0315 07:02:19.312245 3301372 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 07:02:19.312314 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0315 07:02:19.334994 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0315 07:02:19.335065 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0315 07:02:19.343736 3301372 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 07:02:19.343807 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0315 07:02:19.534713 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 07:02:19.546168 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0315 07:02:19.546240 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0315 07:02:19.564230 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 07:02:19.725917 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0315 07:02:19.725944 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0315 07:02:19.981753 3301372 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.475844108s)
	I0315 07:02:19.982680 3301372 node_ready.go:35] waiting up to 6m0s for node "addons-639618" to be "Ready" ...
	I0315 07:02:19.994935 3301372 node_ready.go:49] node "addons-639618" has status "Ready":"True"
	I0315 07:02:19.994972 3301372 node_ready.go:38] duration metric: took 12.24961ms for node "addons-639618" to be "Ready" ...
	I0315 07:02:19.994984 3301372 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:02:19.999818 3301372 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.494137351s)
	I0315 07:02:19.999862 3301372 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
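Note: the bash/sed pipeline that just completed rewrites CoreDNS's Corefile in place: it inserts a "hosts" block (mapping host.minikube.internal to the gateway IP 192.168.49.1) immediately before the "forward . /etc/resolv.conf" directive, adds "log" above "errors", and pushes the result back with kubectl replace. A minimal Go sketch of the same insertion over an in-memory Corefile follows, assuming a plain string transform; minikube itself does this with sed over the ConfigMap YAML, and the "log" insertion is omitted here for brevity.

// corednsinject.go - sketch of the Corefile edit performed by the sed
// pipeline above: insert a hosts{} block before the forward directive.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Mirror the sed address: insert just before the forward directive.
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}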
	I0315 07:02:20.045319 3301372 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:20.162300 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0315 07:02:20.162376 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0315 07:02:20.504259 3301372 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-639618" context rescaled to 1 replicas
	I0315 07:02:20.714266 3301372 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 07:02:20.714289 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0315 07:02:21.015283 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 07:02:21.983799 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.972295757s)
	I0315 07:02:21.983905 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.858697185s)
	I0315 07:02:21.983953 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.799667777s)
	I0315 07:02:21.983979 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.796027041s)
	I0315 07:02:22.052669 3301372 pod_ready.go:102] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"False"
	I0315 07:02:23.735791 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.438145427s)
	I0315 07:02:23.736458 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.470055575s)
	I0315 07:02:23.975992 3301372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0315 07:02:23.976134 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:23.994356 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:24.087963 3301372 pod_ready.go:102] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"False"
	I0315 07:02:24.614972 3301372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0315 07:02:24.693129 3301372 addons.go:234] Setting addon gcp-auth=true in "addons-639618"
	I0315 07:02:24.693228 3301372 host.go:66] Checking if "addons-639618" exists ...
	I0315 07:02:24.693703 3301372 cli_runner.go:164] Run: docker container inspect addons-639618 --format={{.State.Status}}
	I0315 07:02:24.747176 3301372 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0315 07:02:24.747242 3301372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-639618
	I0315 07:02:24.776668 3301372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36680 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/addons-639618/id_rsa Username:docker}
	I0315 07:02:25.969029 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.66817129s)
	I0315 07:02:25.969093 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.300140831s)
	I0315 07:02:25.969799 3301372 addons.go:470] Verifying addon registry=true in "addons-639618"
	I0315 07:02:25.969157 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.951797347s)
	I0315 07:02:25.970010 3301372 addons.go:470] Verifying addon metrics-server=true in "addons-639618"
	I0315 07:02:25.969187 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.752309096s)
	I0315 07:02:25.969240 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.434454475s)
	I0315 07:02:25.969321 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.405021242s)
	W0315 07:02:25.974520 3301372 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0315 07:02:25.974541 3301372 retry.go:31] will retry after 305.881717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
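Note: this failure is the classic CRD establishment race, not a broken manifest. The batch applies the VolumeSnapshotClass CRD and a VolumeSnapshotClass object in the same kubectl invocation; the API server reports "no matches for kind" because the freshly created CRD is not yet established when the dependent object is validated, hence "ensure CRDs are installed first". The retry.go line shows minikube's answer: back off and re-apply (the retry at 07:02:26 below also passes --force), which succeeds once the CRD is served. A minimal Go sketch of that generic retry-with-backoff pattern follows; the command and backoff values are illustrative, not minikube's exact implementation.

// applyretry.go - sketch of the retry pattern reported by retry.go above:
// re-run "kubectl apply" with exponential backoff until new CRDs are
// established and dependent objects stop failing with "no matches for kind".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifestArgs []string, attempts int) error {
	backoff := 300 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		args := append([]string{"apply"}, manifestArgs...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %w\n%s", err, out)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return lastErr
}

func main() {
	// "-f file.yaml" pairs, mirroring the multi-manifest apply in the log.
	err := applyWithRetry([]string{
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
	}, 5)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}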
	I0315 07:02:25.969858 3301372 addons.go:470] Verifying addon ingress=true in "addons-639618"
	I0315 07:02:25.977227 3301372 out.go:177] * Verifying ingress addon...
	I0315 07:02:25.974736 3301372 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-639618 service yakd-dashboard -n yakd-dashboard
	
	I0315 07:02:25.981056 3301372 out.go:177] * Verifying registry addon...
	I0315 07:02:25.980219 3301372 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0315 07:02:25.984075 3301372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0315 07:02:25.989257 3301372 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0315 07:02:25.989284 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:25.990905 3301372 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0315 07:02:25.990928 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:26.281303 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 07:02:26.490240 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:26.492680 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:26.553071 3301372 pod_ready.go:102] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"False"
	I0315 07:02:26.993049 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:26.997976 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:27.383823 3301372 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.636612797s)
	I0315 07:02:27.385890 3301372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 07:02:27.384036 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.368716974s)
	I0315 07:02:27.385994 3301372 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-639618"
	I0315 07:02:27.388478 3301372 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0315 07:02:27.392333 3301372 out.go:177] * Verifying csi-hostpath-driver addon...
	I0315 07:02:27.396272 3301372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0315 07:02:27.392451 3301372 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0315 07:02:27.396474 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0315 07:02:27.419978 3301372 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0315 07:02:27.420018 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:27.492020 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:27.495666 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:27.507299 3301372 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0315 07:02:27.507326 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0315 07:02:27.535487 3301372 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 07:02:27.535513 3301372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0315 07:02:27.634668 3301372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 07:02:27.901768 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:27.990668 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:27.991326 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:28.402527 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:28.492210 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:28.493137 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:28.506627 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.225266934s)
	I0315 07:02:28.898983 3301372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.26425902s)
	I0315 07:02:28.901611 3301372 addons.go:470] Verifying addon gcp-auth=true in "addons-639618"
	I0315 07:02:28.904818 3301372 out.go:177] * Verifying gcp-auth addon...
	I0315 07:02:28.904760 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:28.907654 3301372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0315 07:02:28.911427 3301372 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0315 07:02:28.911450 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:28.989690 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:28.990249 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:29.052629 3301372 pod_ready.go:102] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"False"
	I0315 07:02:29.402384 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:29.411850 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:29.490391 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:29.490977 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:29.901608 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:29.911959 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:29.991184 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:29.992099 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:30.402799 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:30.411731 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:30.489233 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:30.490009 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:30.905304 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:30.913938 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:30.989789 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:30.989965 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:31.057400 3301372 pod_ready.go:102] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"False"
	I0315 07:02:31.402409 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:31.412137 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:31.489604 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:31.492946 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:31.905203 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:31.912777 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:31.990965 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:31.991790 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:32.402752 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:32.412711 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:32.489445 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:32.490735 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:32.903165 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:32.911672 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:32.989336 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:32.990133 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:33.053873 3301372 pod_ready.go:92] pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.053903 3301372 pod_ready.go:81] duration metric: took 13.008545686s for pod "coredns-5dd5756b68-bvwpb" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.053918 3301372 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p4vzh" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.058108 3301372 pod_ready.go:97] error getting pod "coredns-5dd5756b68-p4vzh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-p4vzh" not found
	I0315 07:02:33.058139 3301372 pod_ready.go:81] duration metric: took 4.213916ms for pod "coredns-5dd5756b68-p4vzh" in "kube-system" namespace to be "Ready" ...
	E0315 07:02:33.058153 3301372 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-p4vzh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-p4vzh" not found
	I0315 07:02:33.058162 3301372 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.065268 3301372 pod_ready.go:92] pod "etcd-addons-639618" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.065297 3301372 pod_ready.go:81] duration metric: took 7.127022ms for pod "etcd-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.065314 3301372 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.080775 3301372 pod_ready.go:92] pod "kube-apiserver-addons-639618" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.080806 3301372 pod_ready.go:81] duration metric: took 15.482232ms for pod "kube-apiserver-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.080820 3301372 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.095193 3301372 pod_ready.go:92] pod "kube-controller-manager-addons-639618" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.095221 3301372 pod_ready.go:81] duration metric: took 14.392132ms for pod "kube-controller-manager-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.095235 3301372 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vspwv" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.250237 3301372 pod_ready.go:92] pod "kube-proxy-vspwv" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.250264 3301372 pod_ready.go:81] duration metric: took 155.019635ms for pod "kube-proxy-vspwv" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.250276 3301372 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.405137 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:33.415719 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:33.496286 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:33.497640 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:33.653408 3301372 pod_ready.go:92] pod "kube-scheduler-addons-639618" in "kube-system" namespace has status "Ready":"True"
	I0315 07:02:33.653470 3301372 pod_ready.go:81] duration metric: took 403.185257ms for pod "kube-scheduler-addons-639618" in "kube-system" namespace to be "Ready" ...
	I0315 07:02:33.653502 3301372 pod_ready.go:38] duration metric: took 13.65850477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:02:33.653544 3301372 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:02:33.653640 3301372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:02:33.727578 3301372 api_server.go:72] duration metric: took 16.789099998s to wait for apiserver process to appear ...
	I0315 07:02:33.727648 3301372 api_server.go:88] waiting for apiserver healthz status ...
	I0315 07:02:33.727696 3301372 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0315 07:02:33.738058 3301372 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0315 07:02:33.740249 3301372 api_server.go:141] control plane version: v1.28.4
	I0315 07:02:33.740550 3301372 api_server.go:131] duration metric: took 12.861952ms to wait for apiserver health ...
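Note: the api_server.go lines above show the readiness handshake: wait for the kube-apiserver process, then poll https://192.168.49.2:8443/healthz until it returns HTTP 200 with body "ok". A minimal Go sketch of such a probe follows; the real client authenticates against the cluster CA, and TLS verification is skipped here only to keep the sketch self-contained.

// healthzwait.go - sketch of the health probe in the api_server lines above:
// poll /healthz until it answers 200 "ok" or the deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}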
	I0315 07:02:33.740610 3301372 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 07:02:33.861226 3301372 system_pods.go:59] 18 kube-system pods found
	I0315 07:02:33.861302 3301372 system_pods.go:61] "coredns-5dd5756b68-bvwpb" [86171da0-4f9d-435f-b5f3-d7356f09d42d] Running
	I0315 07:02:33.861326 3301372 system_pods.go:61] "csi-hostpath-attacher-0" [5c60dabc-570d-4125-a69e-ba040ffd7aff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 07:02:33.861348 3301372 system_pods.go:61] "csi-hostpath-resizer-0" [733e4bea-425f-4afe-87fa-d2f0b73eab1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 07:02:33.861383 3301372 system_pods.go:61] "csi-hostpathplugin-jnlcm" [d115f93c-0057-442b-a23b-0f133137a308] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 07:02:33.861412 3301372 system_pods.go:61] "etcd-addons-639618" [97a1e6f6-6fce-4ab0-8887-0c411a8ddcb2] Running
	I0315 07:02:33.861432 3301372 system_pods.go:61] "kindnet-fvhvh" [841d1dac-5347-4ffc-baf9-b0f2f6e0e2bc] Running
	I0315 07:02:33.861451 3301372 system_pods.go:61] "kube-apiserver-addons-639618" [9ed74360-27d6-4eee-9414-ea8a9b844b6f] Running
	I0315 07:02:33.861470 3301372 system_pods.go:61] "kube-controller-manager-addons-639618" [5a14b099-9477-4313-a063-83e550b580ff] Running
	I0315 07:02:33.861507 3301372 system_pods.go:61] "kube-ingress-dns-minikube" [ad86bb35-c921-4b93-a579-dfb19cb64d05] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0315 07:02:33.861531 3301372 system_pods.go:61] "kube-proxy-vspwv" [257662cf-5b9f-4583-85c1-ab3943791538] Running
	I0315 07:02:33.861551 3301372 system_pods.go:61] "kube-scheduler-addons-639618" [573d3180-c01b-4110-aa40-faa1ef068bcc] Running
	I0315 07:02:33.861571 3301372 system_pods.go:61] "metrics-server-69cf46c98-wzppg" [abbd5285-7d78-4282-a24a-889b0049d7bf] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:02:33.861606 3301372 system_pods.go:61] "nvidia-device-plugin-daemonset-kmtmz" [c36f2e50-18db-430e-802c-18aea031ca4a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0315 07:02:33.861628 3301372 system_pods.go:61] "registry-j6pq4" [20915188-e06b-4cee-8a92-daac71a39bdc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0315 07:02:33.861648 3301372 system_pods.go:61] "registry-proxy-q6qc2" [d1ac4493-7e3b-45fb-be8d-76cde45f44bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 07:02:33.861668 3301372 system_pods.go:61] "snapshot-controller-58dbcc7b99-7p968" [cd6062c6-df08-433e-b6b1-a75b24984b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 07:02:33.861702 3301372 system_pods.go:61] "snapshot-controller-58dbcc7b99-mtvvn" [dc81cbac-e66e-4a1c-8880-7782db4763af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 07:02:33.861723 3301372 system_pods.go:61] "storage-provisioner" [e17d18f0-d969-4b0e-a7e4-c1dd02707dc6] Running
	I0315 07:02:33.861741 3301372 system_pods.go:74] duration metric: took 121.108ms to wait for pod list to return data ...
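
The 18-pod inventory above can be reproduced with client-go by listing the kube-system namespace. A hedged sketch follows; the kubeconfig location is an assumption (minikube normally writes the addons-639618 context into the default ~/.kube/config).

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Mirrors the log format above: name, UID, phase.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
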
	I0315 07:02:33.861762 3301372 default_sa.go:34] waiting for default service account to be created ...
	I0315 07:02:33.902337 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:33.913658 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:33.990633 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:33.992299 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:34.050493 3301372 default_sa.go:45] found service account: "default"
	I0315 07:02:34.050532 3301372 default_sa.go:55] duration metric: took 188.74186ms for default service account to be created ...
	I0315 07:02:34.050547 3301372 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 07:02:34.258721 3301372 system_pods.go:86] 18 kube-system pods found
	I0315 07:02:34.258767 3301372 system_pods.go:89] "coredns-5dd5756b68-bvwpb" [86171da0-4f9d-435f-b5f3-d7356f09d42d] Running
	I0315 07:02:34.258778 3301372 system_pods.go:89] "csi-hostpath-attacher-0" [5c60dabc-570d-4125-a69e-ba040ffd7aff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 07:02:34.258789 3301372 system_pods.go:89] "csi-hostpath-resizer-0" [733e4bea-425f-4afe-87fa-d2f0b73eab1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 07:02:34.258801 3301372 system_pods.go:89] "csi-hostpathplugin-jnlcm" [d115f93c-0057-442b-a23b-0f133137a308] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 07:02:34.258811 3301372 system_pods.go:89] "etcd-addons-639618" [97a1e6f6-6fce-4ab0-8887-0c411a8ddcb2] Running
	I0315 07:02:34.258816 3301372 system_pods.go:89] "kindnet-fvhvh" [841d1dac-5347-4ffc-baf9-b0f2f6e0e2bc] Running
	I0315 07:02:34.258820 3301372 system_pods.go:89] "kube-apiserver-addons-639618" [9ed74360-27d6-4eee-9414-ea8a9b844b6f] Running
	I0315 07:02:34.258825 3301372 system_pods.go:89] "kube-controller-manager-addons-639618" [5a14b099-9477-4313-a063-83e550b580ff] Running
	I0315 07:02:34.258842 3301372 system_pods.go:89] "kube-ingress-dns-minikube" [ad86bb35-c921-4b93-a579-dfb19cb64d05] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0315 07:02:34.258851 3301372 system_pods.go:89] "kube-proxy-vspwv" [257662cf-5b9f-4583-85c1-ab3943791538] Running
	I0315 07:02:34.258855 3301372 system_pods.go:89] "kube-scheduler-addons-639618" [573d3180-c01b-4110-aa40-faa1ef068bcc] Running
	I0315 07:02:34.258865 3301372 system_pods.go:89] "metrics-server-69cf46c98-wzppg" [abbd5285-7d78-4282-a24a-889b0049d7bf] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0315 07:02:34.258891 3301372 system_pods.go:89] "nvidia-device-plugin-daemonset-kmtmz" [c36f2e50-18db-430e-802c-18aea031ca4a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0315 07:02:34.258903 3301372 system_pods.go:89] "registry-j6pq4" [20915188-e06b-4cee-8a92-daac71a39bdc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0315 07:02:34.258920 3301372 system_pods.go:89] "registry-proxy-q6qc2" [d1ac4493-7e3b-45fb-be8d-76cde45f44bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 07:02:34.258936 3301372 system_pods.go:89] "snapshot-controller-58dbcc7b99-7p968" [cd6062c6-df08-433e-b6b1-a75b24984b88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 07:02:34.258944 3301372 system_pods.go:89] "snapshot-controller-58dbcc7b99-mtvvn" [dc81cbac-e66e-4a1c-8880-7782db4763af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 07:02:34.258951 3301372 system_pods.go:89] "storage-provisioner" [e17d18f0-d969-4b0e-a7e4-c1dd02707dc6] Running
	I0315 07:02:34.258958 3301372 system_pods.go:126] duration metric: took 208.397959ms to wait for k8s-apps to be running ...
	I0315 07:02:34.258970 3301372 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 07:02:34.259051 3301372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:02:34.337090 3301372 system_svc.go:56] duration metric: took 78.10811ms WaitForService to wait for kubelet
	I0315 07:02:34.337124 3301372 kubeadm.go:576] duration metric: took 17.398650786s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:02:34.337154 3301372 node_conditions.go:102] verifying NodePressure condition ...
	I0315 07:02:34.403765 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:34.412220 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:34.450322 3301372 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0315 07:02:34.450401 3301372 node_conditions.go:123] node cpu capacity is 2
	I0315 07:02:34.450428 3301372 node_conditions.go:105] duration metric: took 113.2682ms to run NodePressure ...
	I0315 07:02:34.450454 3301372 start.go:240] waiting for startup goroutines ...
	I0315 07:02:34.505586 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:34.506486 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:34.904662 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:34.912511 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:34.995261 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:34.996351 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:35.408452 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:35.416200 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:35.492241 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:35.515424 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:35.903191 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:35.911986 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:35.990085 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:35.992846 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:36.403051 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:36.419521 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:36.489967 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:36.492425 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:36.902400 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:36.912186 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:36.990997 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:36.992112 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:37.402570 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:37.412491 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:37.491155 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:37.491872 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:37.901791 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:37.912098 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:37.989393 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:37.991601 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:38.401959 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:38.412712 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:38.490584 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:38.491606 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:38.906441 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:38.911966 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:38.991471 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:38.992620 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:39.403155 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:39.412723 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:39.491556 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:39.492357 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:39.903205 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:39.912518 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:39.988140 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:39.990450 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:40.407621 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:40.412722 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:40.490859 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:40.491635 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:40.903477 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:40.912473 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:40.992494 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:40.993265 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:41.403888 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:41.413839 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:41.488810 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:41.489988 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:41.901580 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:41.912255 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:41.988861 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:41.990396 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:42.409229 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:42.417799 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:42.493662 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:42.494212 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:42.902876 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:42.911289 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:42.988252 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:42.989645 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:43.411804 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:43.413239 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:43.488094 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:43.489631 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:43.902829 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:43.911270 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:43.989485 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:43.990533 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:44.402461 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:44.412304 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:44.489974 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 07:02:44.490904 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:44.901802 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:44.911804 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:44.988577 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:44.989707 3301372 kapi.go:107] duration metric: took 19.00562913s to wait for kubernetes.io/minikube-addons=registry ...
	I0315 07:02:45.407542 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:45.413242 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:45.488360 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:45.902341 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:45.912246 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:45.988546 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:46.401763 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:46.412160 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:46.487986 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:46.901976 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:46.914190 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:46.987769 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:47.401906 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:47.412435 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:47.489568 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:47.902881 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:47.912134 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:47.987654 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:48.402405 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:48.412676 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:48.488626 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:48.902317 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:48.912090 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:48.987568 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:49.402070 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:49.412023 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:49.488962 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:49.902687 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:49.923143 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:49.987638 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:50.402550 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:50.412309 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:50.488216 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:50.903445 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:50.914217 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:50.988868 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:51.403049 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:51.411876 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:51.489143 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:51.902580 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:51.912288 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:51.989081 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:52.401991 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:52.411666 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:52.487936 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:52.910668 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:52.915534 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:52.988383 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:53.402486 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:53.413211 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:53.489806 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:53.904963 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:53.912407 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:53.988617 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:54.402695 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:54.411574 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:54.488296 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:54.902457 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:54.916000 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:54.987849 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:55.401588 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:55.412573 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:55.487519 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:55.903965 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:55.913088 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:55.992122 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:56.403561 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:56.411929 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:56.490139 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:56.904664 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:56.912327 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:56.988078 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:57.402808 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:57.411345 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:57.488344 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:57.904802 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:57.911958 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:57.988099 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:58.402007 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:58.411552 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:58.489061 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:58.902083 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:58.923520 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:58.987610 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:59.401877 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:59.412700 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:59.488647 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:02:59.903068 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:02:59.912002 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:02:59.988910 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:00.402476 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:00.413064 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:00.488224 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:00.902438 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:00.911991 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:00.989546 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:01.405604 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:01.412454 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:01.488353 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:01.902773 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:01.911876 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:01.989316 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:02.402967 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:02.414450 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:02.517497 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:02.902775 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:02.912151 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:02.987327 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:03.402678 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:03.413310 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:03.491040 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:03.902520 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:03.911701 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:03.987942 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:04.403164 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:04.416724 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:04.489349 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:04.902860 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:04.912913 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:04.988558 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:05.402914 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:05.411548 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:05.488395 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:05.902392 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:05.911842 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:05.987851 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:06.402406 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:06.412436 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:06.488725 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:06.904286 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:06.912497 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:06.988197 3301372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 07:03:07.402669 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:07.412207 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:07.488075 3301372 kapi.go:107] duration metric: took 41.50785167s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0315 07:03:07.902290 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:07.912120 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:08.402943 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:08.413378 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:08.907089 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:08.914440 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 07:03:09.404900 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:09.412012 3301372 kapi.go:107] duration metric: took 40.504359869s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0315 07:03:09.414481 3301372 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-639618 cluster.
	I0315 07:03:09.416550 3301372 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0315 07:03:09.418530 3301372 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
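
The skip-label advice above is mechanical: per the message, the gcp-auth addon leaves alone any pod whose metadata carries a label with the gcp-auth-skip-secret key. A sketch of creating such a pod with client-go; the pod name and image are placeholders, and the value "true" is a conventional choice, since the message indicates it is the key that matters.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds", // placeholder name
                // The label key the gcp-auth addon checks before mounting credentials.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "nginx"}, // placeholder image
                },
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
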
	I0315 07:03:09.903050 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:10.401565 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:10.901825 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:11.402545 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:11.903696 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:12.404927 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:12.901942 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:13.402822 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:13.903157 3301372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 07:03:14.401773 3301372 kapi.go:107] duration metric: took 47.005498081s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
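
Each kapi.go:96 line above is one iteration of a poll: list the pods matching a label selector, report the current phase, and stop once every match is Running (kapi.go:107 then prints the final duration). A simplified sketch of that loop under the same kubeconfig assumption as earlier; minikube's real implementation lives in kapi.go and differs in detail.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls until at least one pod matches the selector and
    // every match is Running, mirroring the "current state: Pending" loop.
    func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false
                        break
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForSelector(cs, "kube-system",
            "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute))
    }
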
	I0315 07:03:14.404305 3301372 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0315 07:03:14.406430 3301372 addons.go:505] duration metric: took 57.467715408s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0315 07:03:14.406486 3301372 start.go:245] waiting for cluster config update ...
	I0315 07:03:14.406513 3301372 start.go:254] writing updated cluster config ...
	I0315 07:03:14.406834 3301372 ssh_runner.go:195] Run: rm -f paused
	I0315 07:03:14.770057 3301372 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0315 07:03:14.773316 3301372 out.go:177] * Done! kubectl is now configured to use "addons-639618" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	81612a774413a       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app                          2                   b93bdc2f9fa9e       hello-world-app-5d77478584-9x7n4
	19a4415f573b4       be5e6f23a9904       34 seconds ago       Running             nginx                                    0                   0281a6ca7f65b       nginx
	bbaed215175c4       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	d42851d8379af       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	3c2ab171b1e89       922312104da8a       About a minute ago   Running             liveness-probe                           0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	fdcbc12262821       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	4ee586e9e689c       6ef582f3ec844       About a minute ago   Running             gcp-auth                                 0                   f97d76de001e6       gcp-auth-7d69788767-p7hnn
	c7ab3dff9b144       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	beb76c667a879       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator                   0                   76970e71b2e9d       cloud-spanner-emulator-6548d5df46-l95pc
	a27de48472f1b       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   d5116177efe4b       nvidia-device-plugin-daemonset-kmtmz
	049b4dadc54d5       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   bbc6551863c16       csi-hostpath-resizer-0
	8691dcfdb6997       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   6a33defaf041c       csi-hostpath-attacher-0
	8fdf23f676f75       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   e3c81c7cf0285       csi-hostpathplugin-jnlcm
	6e196a336d50b       1a024e390dd05       About a minute ago   Exited              patch                                    0                   0d46c9f59f462       ingress-nginx-admission-patch-hz9lg
	8123653d22863       1a024e390dd05       About a minute ago   Exited              create                                   0                   a52e0236da929       ingress-nginx-admission-create-lzhgs
	014c50f35ca51       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   7ce8e0b8b9f20       yakd-dashboard-9947fc6bf-vpjlb
	3790cffab56df       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   c879bd67cda8d       snapshot-controller-58dbcc7b99-mtvvn
	f24d57b38493f       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   0cdc81695cf57       snapshot-controller-58dbcc7b99-7p968
	159ee1e0baed2       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   985b3647b82f9       local-path-provisioner-78b46b4d5c-clf9r
	060dc18dc1960       97e04611ad434       About a minute ago   Running             coredns                                  0                   17f9d011ad23d       coredns-5dd5756b68-bvwpb
	3cac7a511f26e       ba04bb24b9575       About a minute ago   Running             storage-provisioner                      0                   b30ad3cbb0fea       storage-provisioner
	13b76c73cb8f0       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                              0                   5151acaa2f14e       kindnet-fvhvh
	eaeca899ba0f1       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   26b26e9463ca7       kube-proxy-vspwv
	dcd03964d9cdf       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   2c8dddaaff1b1       etcd-addons-639618
	889bca7960390       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   2602259836f0e       kube-apiserver-addons-639618
	e76bd522fc85f       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   68f0305f44e77       kube-controller-manager-addons-639618
	0381738804c5f       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   e418725d06a16       kube-scheduler-addons-639618
	
	
	==> containerd <==
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.239718693Z" level=info msg="cleaning up dead shim"
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.248332015Z" level=warning msg="cleanup warnings time=\"2024-03-15T07:04:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8509 runtime=io.containerd.runc.v2\n"
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.288109244Z" level=info msg="TearDown network for sandbox \"4387336d8b3c675f2f61944ccc744ff28ecc98eb6df913f8febfa8d5804e61e1\" successfully"
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.288278890Z" level=info msg="StopPodSandbox for \"4387336d8b3c675f2f61944ccc744ff28ecc98eb6df913f8febfa8d5804e61e1\" returns successfully"
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.351013401Z" level=info msg="RemoveContainer for \"c1b8224c32dfad61047a116ba320044c3cfe0d74f2c34351515f088b5b7bb485\""
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.356717473Z" level=info msg="RemoveContainer for \"c1b8224c32dfad61047a116ba320044c3cfe0d74f2c34351515f088b5b7bb485\" returns successfully"
	Mar 15 07:04:17 addons-639618 containerd[756]: time="2024-03-15T07:04:17.357322899Z" level=error msg="ContainerStatus for \"c1b8224c32dfad61047a116ba320044c3cfe0d74f2c34351515f088b5b7bb485\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b8224c32dfad61047a116ba320044c3cfe0d74f2c34351515f088b5b7bb485\": not found"
	Mar 15 07:04:20 addons-639618 containerd[756]: time="2024-03-15T07:04:20.976908014Z" level=info msg="StopContainer for \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" with timeout 30 (s)"
	Mar 15 07:04:20 addons-639618 containerd[756]: time="2024-03-15T07:04:20.977296517Z" level=info msg="Stop container \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" with signal quit"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.045642312Z" level=info msg="shim disconnected" id=dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.045709797Z" level=warning msg="cleaning up after shim disconnected" id=dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0 namespace=k8s.io
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.045723450Z" level=info msg="cleaning up dead shim"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.053911403Z" level=warning msg="cleanup warnings time=\"2024-03-15T07:04:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8578 runtime=io.containerd.runc.v2\n"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.058033054Z" level=info msg="StopContainer for \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" returns successfully"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.058613422Z" level=info msg="StopPodSandbox for \"16491ddab1f8bd37bf0b03c012c16a262e9dfce1e9c4a85b00a563821ed38601\""
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.058677815Z" level=info msg="Container to stop \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.093736911Z" level=info msg="shim disconnected" id=16491ddab1f8bd37bf0b03c012c16a262e9dfce1e9c4a85b00a563821ed38601
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.094629665Z" level=warning msg="cleaning up after shim disconnected" id=16491ddab1f8bd37bf0b03c012c16a262e9dfce1e9c4a85b00a563821ed38601 namespace=k8s.io
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.094748448Z" level=info msg="cleaning up dead shim"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.104179309Z" level=warning msg="cleanup warnings time=\"2024-03-15T07:04:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8612 runtime=io.containerd.runc.v2\n"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.119920784Z" level=info msg="TearDown network for sandbox \"16491ddab1f8bd37bf0b03c012c16a262e9dfce1e9c4a85b00a563821ed38601\" successfully"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.120144541Z" level=info msg="StopPodSandbox for \"16491ddab1f8bd37bf0b03c012c16a262e9dfce1e9c4a85b00a563821ed38601\" returns successfully"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.376630807Z" level=info msg="RemoveContainer for \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\""
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.388940616Z" level=info msg="RemoveContainer for \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" returns successfully"
	Mar 15 07:04:21 addons-639618 containerd[756]: time="2024-03-15T07:04:21.395464460Z" level=error msg="ContainerStatus for \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc94a29779ddd5b212bc8c17782b7a5fbbdb96c0e7168c25f47d046cea8f3d0\": not found"
	
	
	==> coredns [060dc18dc196090692156ae732c56af6857a88a5fc76857e8de6f2a822ddaaa8] <==
	[INFO] 10.244.0.19:60317 - 16916 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077143s
	[INFO] 10.244.0.19:60317 - 7944 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002278544s
	[INFO] 10.244.0.19:40803 - 53756 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002675335s
	[INFO] 10.244.0.19:60317 - 46011 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001082002s
	[INFO] 10.244.0.19:40803 - 30244 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001919323s
	[INFO] 10.244.0.19:40803 - 40462 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00013278s
	[INFO] 10.244.0.19:60317 - 31958 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038153s
	[INFO] 10.244.0.19:49952 - 35001 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104999s
	[INFO] 10.244.0.19:49952 - 52464 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092921s
	[INFO] 10.244.0.19:58890 - 51732 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064187s
	[INFO] 10.244.0.19:49952 - 25962 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071162s
	[INFO] 10.244.0.19:58890 - 19127 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000156337s
	[INFO] 10.244.0.19:49952 - 30752 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007766s
	[INFO] 10.244.0.19:49952 - 42498 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062604s
	[INFO] 10.244.0.19:58890 - 35849 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043051s
	[INFO] 10.244.0.19:49952 - 51289 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004082s
	[INFO] 10.244.0.19:58890 - 59672 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035297s
	[INFO] 10.244.0.19:58890 - 53591 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005549s
	[INFO] 10.244.0.19:49952 - 42540 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00168047s
	[INFO] 10.244.0.19:58890 - 25220 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001807047s
	[INFO] 10.244.0.19:58890 - 38021 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001592029s
	[INFO] 10.244.0.19:49952 - 45556 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001580919s
	[INFO] 10.244.0.19:49952 - 22261 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000334023s
	[INFO] 10.244.0.19:58890 - 15070 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001321053s
	[INFO] 10.244.0.19:58890 - 42747 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096753s
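
A note on the NXDOMAIN bursts above: they are the resolver's search-path expansion at work, not errors. With the typical in-cluster resolv.conf (ndots:5), a name such as hello-world-app.default.svc.cluster.local has only four dots, so each search suffix is tried first (producing the ...svc.cluster.local, ...cluster.local, and ...us-east-2.compute.internal NXDOMAIN lookups) before the name is queried as-is and answers NOERROR. A small sketch of that expansion rule; the search list is inferred from the query names in the log, which suggest the client sits in the ingress-nginx namespace.

    package main

    import (
        "fmt"
        "strings"
    )

    // expand mimics resolv.conf search handling: a name with fewer than
    // ndots dots is tried with each search suffix before being tried as-is.
    func expand(name string, search []string, ndots int) []string {
        var tries []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                tries = append(tries, name+"."+s)
            }
        }
        return append(tries, name)
    }

    func main() {
        // Search path inferred from the coredns queries above.
        search := []string{
            "ingress-nginx.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "us-east-2.compute.internal",
        }
        for _, q := range expand("hello-world-app.default.svc.cluster.local", search, 5) {
            fmt.Println(q)
        }
    }
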
	
	
	==> describe nodes <==
	Name:               addons-639618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-639618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=addons-639618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_02_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-639618
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-639618"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:02:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-639618
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:04:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:04:07 +0000   Fri, 15 Mar 2024 07:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:04:07 +0000   Fri, 15 Mar 2024 07:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:04:07 +0000   Fri, 15 Mar 2024 07:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:04:07 +0000   Fri, 15 Mar 2024 07:02:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-639618
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff0e3c7f8e2246ef945a373eec8fe60b
	  System UUID:                9cb7793f-455a-4a00-b842-f12ca0e0c822
	  Boot ID:                    be4a23ea-b3ea-44f1-92fd-06f8e96fb1b3
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-l95pc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  default                     hello-world-app-5d77478584-9x7n4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-7d69788767-p7hnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 coredns-5dd5756b68-bvwpb                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpathplugin-jnlcm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 etcd-addons-639618                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-fvhvh                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-addons-639618               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-639618      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-vspwv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-639618               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 nvidia-device-plugin-daemonset-kmtmz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 snapshot-controller-58dbcc7b99-7p968       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-58dbcc7b99-mtvvn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-clf9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-9947fc6bf-vpjlb             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m2s   kube-proxy       
	  Normal  Starting                 2m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s  kubelet          Node addons-639618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s  kubelet          Node addons-639618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s  kubelet          Node addons-639618 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m18s  kubelet          Node addons-639618 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m18s  kubelet          Node addons-639618 status is now: NodeReady
	  Normal  RegisteredNode           2m7s   node-controller  Node addons-639618 event: Registered Node addons-639618 in Controller
	
	
	==> dmesg <==
	[  +0.001206] FS-Cache: O-key=[8] 'e76e3b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001223] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=00000000304cb3f2
	[  +0.001202] FS-Cache: N-key=[8] 'e76e3b0000000000'
	[  +2.499631] FS-Cache: Duplicate cookie detected
	[  +0.000804] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001035] FS-Cache: O-cookie d=000000007332a028{9p.inode} n=000000005145f6ae
	[  +0.001144] FS-Cache: O-key=[8] 'e66e3b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=000000005d6d74fc
	[  +0.001043] FS-Cache: N-key=[8] 'e66e3b0000000000'
	[  +0.355130] FS-Cache: Duplicate cookie detected
	[  +0.000930] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001411] FS-Cache: O-cookie d=000000007332a028{9p.inode} n=000000005519e727
	[  +0.001580] FS-Cache: O-key=[8] 'ec6e3b0000000000'
	[  +0.001146] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001550] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=000000007ad487f1
	[  +0.002031] FS-Cache: N-key=[8] 'ec6e3b0000000000'
	[  +3.702032] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=000000c6 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001101] FS-Cache: O-cookie d=0000000088b624c4{9P.session} n=000000006e6fcce9
	[  +0.001163] FS-Cache: O-key=[10] '34333038303033373036'
	[  +0.000916] FS-Cache: N-cookie c=000000c7 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=0000000088b624c4{9P.session} n=000000008a7bb1d9
	[  +0.001164] FS-Cache: N-key=[10] '34333038303033373036'
	
	
	==> etcd [dcd03964d9cdf99d86c94aabc17427c25b555ad95edcd6885f687853998052d9] <==
	{"level":"info","ts":"2024-03-15T07:01:58.187441Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:01:58.18824Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:01:58.188254Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T07:01:58.187767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-15T07:01:58.188412Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-15T07:01:58.187796Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-15T07:01:58.188488Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-15T07:01:59.035125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-15T07:01:59.035342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-15T07:01:59.03544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-15T07:01:59.035591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-15T07:01:59.035669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-15T07:01:59.035771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-15T07:01:59.035859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-15T07:01:59.039253Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-639618 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T07:01:59.039515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:01:59.040748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T07:01:59.041006Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:01:59.041217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T07:01:59.042239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-15T07:01:59.045836Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T07:01:59.045955Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T07:01:59.046415Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:01:59.046623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T07:01:59.098347Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [4ee586e9e689cf370b2c0905f87f13044f33c6cc3e9ef5941bce2fa56fc0155b] <==
	2024/03/15 07:03:08 GCP Auth Webhook started!
	2024/03/15 07:03:25 Ready to marshal response ...
	2024/03/15 07:03:25 Ready to write response ...
	2024/03/15 07:03:42 Ready to marshal response ...
	2024/03/15 07:03:42 Ready to write response ...
	2024/03/15 07:03:46 Ready to marshal response ...
	2024/03/15 07:03:46 Ready to write response ...
	2024/03/15 07:03:56 Ready to marshal response ...
	2024/03/15 07:03:56 Ready to write response ...
	2024/03/15 07:04:12 Ready to marshal response ...
	2024/03/15 07:04:12 Ready to write response ...
	
	
	==> kernel <==
	 07:04:23 up 15:46,  0 users,  load average: 4.40, 3.77, 3.79
	Linux addons-639618 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [13b76c73cb8f04c684e998fafd50d1e75f30f9c7223138c819ac9c42f81f1297] <==
	I0315 07:02:22.538309       1 main.go:227] handling current node
	I0315 07:02:32.556192       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:02:32.556276       1 main.go:227] handling current node
	I0315 07:02:42.567364       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:02:42.567393       1 main.go:227] handling current node
	I0315 07:02:52.571693       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:02:52.571719       1 main.go:227] handling current node
	I0315 07:03:02.583484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:02.583513       1 main.go:227] handling current node
	I0315 07:03:12.594770       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:12.594799       1 main.go:227] handling current node
	I0315 07:03:22.606761       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:22.606788       1 main.go:227] handling current node
	I0315 07:03:32.615591       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:32.615622       1 main.go:227] handling current node
	I0315 07:03:42.626739       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:42.626847       1 main.go:227] handling current node
	I0315 07:03:52.652751       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:03:52.652781       1 main.go:227] handling current node
	I0315 07:04:02.657067       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:04:02.657388       1 main.go:227] handling current node
	I0315 07:04:12.666688       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:04:12.666717       1 main.go:227] handling current node
	I0315 07:04:22.685954       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0315 07:04:22.685988       1 main.go:227] handling current node
	
	
	==> kube-apiserver [889bca796039015bdef35e84821581215cc727dada222e4032ccaaea2f5aecd7] <==
	I0315 07:02:27.120346       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.100.118.184"}
	I0315 07:02:27.138742       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0315 07:02:27.311457       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.162.26"}
	W0315 07:02:27.928395       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:02:28.695378       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.151.177"}
	W0315 07:02:42.765992       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 07:02:42.766057       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:02:42.766414       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0315 07:02:42.766980       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.203.35:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.203.35:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.203.35:443: connect: connection refused
	E0315 07:02:42.769677       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.203.35:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.203.35:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.203.35:443: connect: connection refused
	I0315 07:02:42.863366       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 07:03:01.607815       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0315 07:03:28.236496       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4008564180), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4006535310), ResponseWriter:(*httpsnoop.rw)(0x4006535310), Flusher:(*httpsnoop.rw)(0x4006535310), CloseNotifier:(*httpsnoop.rw)(0x4006535310), Pusher:(*httpsnoop.rw)(0x4006535310)}}, encoder:(*versioning.codec)(0x4004ca0640), memAllocator:(*runtime.Allocator)(0x400d517c68)})
	I0315 07:03:40.684141       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0315 07:03:40.690270       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0315 07:03:41.728000       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0315 07:03:43.781657       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0315 07:03:46.344750       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0315 07:03:46.735007       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.108.251"}
	I0315 07:03:52.519688       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0315 07:03:56.545295       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.216.140"}
	E0315 07:04:14.174829       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0315 07:04:16.171475       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [e76bd522fc85f9f57712e66d43d7d3c430ea3c19a2d201948aaa94657ae19f12] <==
	W0315 07:03:50.151298       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 07:03:50.151398       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0315 07:03:50.850516       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0315 07:03:55.627033       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 07:03:56.238620       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0315 07:03:56.270352       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9x7n4"
	I0315 07:03:56.293642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.704653ms"
	I0315 07:03:56.333136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.445988ms"
	I0315 07:03:56.355436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="22.253281ms"
	I0315 07:03:56.355524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.504µs"
	W0315 07:03:56.750001       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 07:03:56.750034       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0315 07:03:59.281080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.062µs"
	I0315 07:04:00.372717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="89.072µs"
	I0315 07:04:01.299489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.894µs"
	I0315 07:04:01.583205       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 07:04:12.094427       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 07:04:14.102062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="4.717µs"
	I0315 07:04:14.106548       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0315 07:04:14.110302       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0315 07:04:14.379554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.541µs"
	W0315 07:04:21.780628       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 07:04:21.780733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0315 07:04:22.797299       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0315 07:04:22.883681       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [eaeca899ba0f1a393a8ec754263c9b6720cf19e1c6d66e12abbbd0adb1512373] <==
	I0315 07:02:20.361384       1 server_others.go:69] "Using iptables proxy"
	I0315 07:02:20.395760       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0315 07:02:20.440081       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0315 07:02:20.443289       1 server_others.go:152] "Using iptables Proxier"
	I0315 07:02:20.443328       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0315 07:02:20.443337       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0315 07:02:20.443368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 07:02:20.443570       1 server.go:846] "Version info" version="v1.28.4"
	I0315 07:02:20.443580       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 07:02:20.444750       1 config.go:188] "Starting service config controller"
	I0315 07:02:20.444761       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 07:02:20.444780       1 config.go:97] "Starting endpoint slice config controller"
	I0315 07:02:20.444784       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 07:02:20.445137       1 config.go:315] "Starting node config controller"
	I0315 07:02:20.445143       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 07:02:20.545814       1 shared_informer.go:318] Caches are synced for service config
	I0315 07:02:20.545860       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 07:02:20.545978       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0381738804c5f4c770eae76d006ae265c9dd489868b0c23398651a9d57b1496a] <==
	W0315 07:02:01.991377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 07:02:01.991467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 07:02:01.991608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 07:02:01.991694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 07:02:01.995053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 07:02:01.995226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0315 07:02:01.995444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 07:02:01.995534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 07:02:01.995698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 07:02:01.995784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 07:02:01.995969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:02:01.996053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 07:02:01.996272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 07:02:01.996361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 07:02:01.996540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:02:01.996627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 07:02:01.996787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:02:01.996869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 07:02:02.874934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 07:02:02.874968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 07:02:02.946495       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 07:02:02.946599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 07:02:03.119377       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:02:03.119636       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0315 07:02:06.376368       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578123    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-mountpoint-dir\") pod \"d115f93c-0057-442b-a23b-0f133137a308\" (UID: \"d115f93c-0057-442b-a23b-0f133137a308\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578477    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-plugins-dir\") pod \"d115f93c-0057-442b-a23b-0f133137a308\" (UID: \"d115f93c-0057-442b-a23b-0f133137a308\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578615    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-registration-dir\") pod \"d115f93c-0057-442b-a23b-0f133137a308\" (UID: \"d115f93c-0057-442b-a23b-0f133137a308\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578640    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-socket-dir\") pod \"d115f93c-0057-442b-a23b-0f133137a308\" (UID: \"d115f93c-0057-442b-a23b-0f133137a308\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578666    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2rk\" (UniqueName: \"kubernetes.io/projected/733e4bea-425f-4afe-87fa-d2f0b73eab1d-kube-api-access-dm2rk\") pod \"733e4bea-425f-4afe-87fa-d2f0b73eab1d\" (UID: \"733e4bea-425f-4afe-87fa-d2f0b73eab1d\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578691    1493 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-csi-data-dir\") pod \"d115f93c-0057-442b-a23b-0f133137a308\" (UID: \"d115f93c-0057-442b-a23b-0f133137a308\") "
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578771    1493 reconciler_common.go:300] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c60dabc-570d-4125-a69e-ba040ffd7aff-socket-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578784    1493 reconciler_common.go:300] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/733e4bea-425f-4afe-87fa-d2f0b73eab1d-socket-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578797    1493 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hd6wj\" (UniqueName: \"kubernetes.io/projected/5c60dabc-570d-4125-a69e-ba040ffd7aff-kube-api-access-hd6wj\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578825    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-csi-data-dir" (OuterVolumeSpecName: "csi-data-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "csi-data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578850    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-dev-dir" (OuterVolumeSpecName: "dev-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "dev-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578886    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-mountpoint-dir" (OuterVolumeSpecName: "mountpoint-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "mountpoint-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578904    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-plugins-dir" (OuterVolumeSpecName: "plugins-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "plugins-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578923    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-registration-dir" (OuterVolumeSpecName: "registration-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "registration-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.578941    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-socket-dir" (OuterVolumeSpecName: "socket-dir") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "socket-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.580674    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733e4bea-425f-4afe-87fa-d2f0b73eab1d-kube-api-access-dm2rk" (OuterVolumeSpecName: "kube-api-access-dm2rk") pod "733e4bea-425f-4afe-87fa-d2f0b73eab1d" (UID: "733e4bea-425f-4afe-87fa-d2f0b73eab1d"). InnerVolumeSpecName "kube-api-access-dm2rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.581476    1493 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d115f93c-0057-442b-a23b-0f133137a308-kube-api-access-zskx4" (OuterVolumeSpecName: "kube-api-access-zskx4") pod "d115f93c-0057-442b-a23b-0f133137a308" (UID: "d115f93c-0057-442b-a23b-0f133137a308"). InnerVolumeSpecName "kube-api-access-zskx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682748    1493 reconciler_common.go:300] "Volume detached for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-mountpoint-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682779    1493 reconciler_common.go:300] "Volume detached for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-plugins-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682792    1493 reconciler_common.go:300] "Volume detached for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-registration-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682803    1493 reconciler_common.go:300] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-socket-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682816    1493 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dm2rk\" (UniqueName: \"kubernetes.io/projected/733e4bea-425f-4afe-87fa-d2f0b73eab1d-kube-api-access-dm2rk\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682829    1493 reconciler_common.go:300] "Volume detached for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-csi-data-dir\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682840    1493 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zskx4\" (UniqueName: \"kubernetes.io/projected/d115f93c-0057-442b-a23b-0f133137a308-kube-api-access-zskx4\") on node \"addons-639618\" DevicePath \"\""
	Mar 15 07:04:23 addons-639618 kubelet[1493]: I0315 07:04:23.682877    1493 reconciler_common.go:300] "Volume detached for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/d115f93c-0057-442b-a23b-0f133137a308-dev-dir\") on node \"addons-639618\" DevicePath \"\""
	
	
	==> storage-provisioner [3cac7a511f26e4245e68a3704d21457f754ec09765d28dc3e07ed2a6ac0fbb18] <==
	I0315 07:02:24.843390       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:02:24.864167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:02:24.864240       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:02:24.874293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:02:24.876186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-639618_538da1a4-6b30-4c4a-9e7e-88e42d9e8f3e!
	I0315 07:02:24.877593       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7c78d54-7e2b-4a35-93cb-110d0c62a4e3", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-639618_538da1a4-6b30-4c4a-9e7e-88e42d9e8f3e became leader
	I0315 07:02:24.976702       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-639618_538da1a4-6b30-4c4a-9e7e-88e42d9e8f3e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-639618 -n addons-639618
helpers_test.go:261: (dbg) Run:  kubectl --context addons-639618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: csi-hostpath-resizer-0 csi-hostpathplugin-jnlcm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-639618 describe pod csi-hostpath-resizer-0 csi-hostpathplugin-jnlcm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-639618 describe pod csi-hostpath-resizer-0 csi-hostpathplugin-jnlcm: exit status 1 (118.683213ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "csi-hostpath-resizer-0" not found
	Error from server (NotFound): pods "csi-hostpathplugin-jnlcm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-639618 describe pod csi-hostpath-resizer-0 csi-hostpathplugin-jnlcm: exit status 1
--- FAIL: TestAddons/parallel/Ingress (38.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr
2024/03/15 07:10:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr: (4.270090154s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-757678" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.57s)
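
Note: this failure and the two --daemon variants that follow (ImageReloadDaemon, ImageTagAndLoadDaemon) all trip the same assertion at functional_test.go:442: after `image load` reports success, the harness lists the images in the cluster and looks for the expected tag. A sketch of that style of presence check, assuming only the `minikube image ls` output format (the helper below is illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageLoaded runs `minikube image ls` for a profile and reports whether
	// the expected tag appears in the container runtime's image list.
	func imageLoaded(minikube, profile, tag string) (bool, error) {
		out, err := exec.Command(minikube, "-p", profile, "image", "ls").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("image ls: %v: %s", err, out)
		}
		return strings.Contains(string(out), tag), nil
	}

	func main() {
		ok, err := imageLoaded("out/minikube-linux-arm64", "functional-757678",
			"gcr.io/google-containers/addon-resizer:functional-757678")
		fmt.Println(ok, err)
	}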

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr: (3.831104764s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-757678" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.601021066s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-757678
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 image load --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr: (3.146508978s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-757678" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image save gcr.io/google-containers/addon-resizer:functional-757678 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)
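
Note: the `image save` command exited cleanly but left no tar on disk, and the ImageLoadFromFile failure below is downstream of this one (its stderr stats the same path and gets "no such file or directory"). A minimal post-save artifact check using the path from this run (the check itself is illustrative, not the harness's code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const tar = "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
		info, err := os.Stat(tar)
		if err != nil {
			fmt.Println("image save left no artifact:", err) // what this test observed
			return
		}
		fmt.Printf("saved %s (%d bytes)\n", tar, info.Size())
	}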

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0315 07:10:31.358849 3333913 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:10:31.359517 3333913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:31.359533 3333913 out.go:304] Setting ErrFile to fd 2...
	I0315 07:10:31.359540 3333913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:31.359835 3333913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:10:31.360494 3333913 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:10:31.360658 3333913 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:10:31.361183 3333913 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
	I0315 07:10:31.381938 3333913 ssh_runner.go:195] Run: systemctl --version
	I0315 07:10:31.382034 3333913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
	I0315 07:10:31.398034 3333913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
	I0315 07:10:31.492093 3333913 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0315 07:10:31.492172 3333913 cache_images.go:254] Failed to load cached images for profile functional-757678. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0315 07:10:31.492198 3333913 cache_images.go:262] succeeded pushing to: 
	I0315 07:10:31.492205 3333913 cache_images.go:263] failed pushing to: functional-757678

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
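
Note: the stderr above pins down the cascade: cache_images.go stats /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar and finds nothing, so this test fails on the artifact that ImageSaveToFile never wrote rather than on loading itself. A hypothetical guard that would surface the missing prerequisite as a skip instead of a failure (requireArtifact is illustrative and not part of functional_test.go):

	package integration

	import (
		"errors"
		"io/fs"
		"os"
		"testing"
	)

	// requireArtifact skips the calling test when a file produced by an
	// earlier step (here, the tar written by `image save`) does not exist.
	func requireArtifact(t *testing.T, path string) {
		t.Helper()
		if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
			t.Skipf("prerequisite %s missing; the earlier save step failed", path)
		}
	}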

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (373.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m10.305554022s)

                                                
                                                
-- stdout --
	* [old-k8s-version-591842] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-591842" primary control-plane node in "old-k8s-version-591842" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Restarting existing docker container for "old-k8s-version-591842" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-591842 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0315 07:45:31.788037 3490722 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:45:31.788242 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:45:31.788267 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:45:31.788285 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:45:31.788590 3490722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:45:31.788989 3490722 out.go:298] Setting JSON to false
	I0315 07:45:31.790118 3490722 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":59276,"bootTime":1710429456,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:45:31.790209 3490722 start.go:139] virtualization:  
	I0315 07:45:31.793143 3490722 out.go:177] * [old-k8s-version-591842] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:45:31.796058 3490722 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:45:31.797898 3490722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:45:31.796204 3490722 notify.go:220] Checking for updates...
	I0315 07:45:31.802946 3490722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:45:31.805947 3490722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:45:31.808541 3490722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:45:31.810656 3490722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:45:31.813566 3490722 config.go:182] Loaded profile config "old-k8s-version-591842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0315 07:45:31.816469 3490722 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0315 07:45:31.818840 3490722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:45:31.849482 3490722 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:45:31.849644 3490722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:45:31.903126 3490722 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:45:31.894226267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:45:31.903229 3490722 docker.go:295] overlay module found
	I0315 07:45:31.905731 3490722 out.go:177] * Using the docker driver based on existing profile
	I0315 07:45:31.908180 3490722 start.go:297] selected driver: docker
	I0315 07:45:31.908210 3490722 start.go:901] validating driver "docker" against &{Name:old-k8s-version-591842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-591842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:45:31.908319 3490722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:45:31.908927 3490722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:45:31.960943 3490722 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:45:31.951244949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:45:31.961283 3490722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:45:31.961354 3490722 cni.go:84] Creating CNI manager for ""
	I0315 07:45:31.961373 3490722 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:45:31.961424 3490722 start.go:340] cluster config:
	{Name:old-k8s-version-591842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-591842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:45:31.969332 3490722 out.go:177] * Starting "old-k8s-version-591842" primary control-plane node in "old-k8s-version-591842" cluster
	I0315 07:45:31.971719 3490722 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:45:31.973881 3490722 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:45:31.976061 3490722 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0315 07:45:31.976110 3490722 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0315 07:45:31.976130 3490722 cache.go:56] Caching tarball of preloaded images
	I0315 07:45:31.976215 3490722 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:45:31.976232 3490722 preload.go:173] Found /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0315 07:45:31.976242 3490722 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0315 07:45:31.976355 3490722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/config.json ...
	I0315 07:45:31.991744 3490722 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0315 07:45:31.991769 3490722 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0315 07:45:31.991790 3490722 cache.go:194] Successfully downloaded all kic artifacts
	I0315 07:45:31.991831 3490722 start.go:360] acquireMachinesLock for old-k8s-version-591842: {Name:mk63b04c4dd89051346b19e9171d31d6d188f42c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:45:31.991906 3490722 start.go:364] duration metric: took 49.287µs to acquireMachinesLock for "old-k8s-version-591842"
	I0315 07:45:31.991933 3490722 start.go:96] Skipping create...Using existing machine configuration
	I0315 07:45:31.991939 3490722 fix.go:54] fixHost starting: 
	I0315 07:45:31.992200 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:32.015340 3490722 fix.go:112] recreateIfNeeded on old-k8s-version-591842: state=Stopped err=<nil>
	W0315 07:45:32.015377 3490722 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 07:45:32.018244 3490722 out.go:177] * Restarting existing docker container for "old-k8s-version-591842" ...
	I0315 07:45:32.020480 3490722 cli_runner.go:164] Run: docker start old-k8s-version-591842
	I0315 07:45:32.369154 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:32.391641 3490722 kic.go:430] container "old-k8s-version-591842" state is running.
	I0315 07:45:32.392049 3490722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-591842
	I0315 07:45:32.413426 3490722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/config.json ...
	I0315 07:45:32.413654 3490722 machine.go:94] provisionDockerMachine start ...
	I0315 07:45:32.413707 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:32.435169 3490722 main.go:141] libmachine: Using SSH client type: native
	I0315 07:45:32.435451 3490722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36975 <nil> <nil>}
	I0315 07:45:32.435463 3490722 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:45:32.436003 3490722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54058->127.0.0.1:36975: read: connection reset by peer
	I0315 07:45:35.582619 3490722 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-591842
	
	I0315 07:45:35.582649 3490722 ubuntu.go:169] provisioning hostname "old-k8s-version-591842"
	I0315 07:45:35.582715 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:35.604496 3490722 main.go:141] libmachine: Using SSH client type: native
	I0315 07:45:35.604750 3490722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36975 <nil> <nil>}
	I0315 07:45:35.604767 3490722 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-591842 && echo "old-k8s-version-591842" | sudo tee /etc/hostname
	I0315 07:45:35.764476 3490722 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-591842
	
	I0315 07:45:35.764557 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:35.782515 3490722 main.go:141] libmachine: Using SSH client type: native
	I0315 07:45:35.782767 3490722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36975 <nil> <nil>}
	I0315 07:45:35.782791 3490722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-591842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-591842/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-591842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 07:45:35.927182 3490722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:45:35.927210 3490722 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18213-3295134/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-3295134/.minikube}
	I0315 07:45:35.927235 3490722 ubuntu.go:177] setting up certificates
	I0315 07:45:35.927244 3490722 provision.go:84] configureAuth start
	I0315 07:45:35.927308 3490722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-591842
	I0315 07:45:35.947333 3490722 provision.go:143] copyHostCerts
	I0315 07:45:35.947410 3490722 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem, removing ...
	I0315 07:45:35.947431 3490722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem
	I0315 07:45:35.947505 3490722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem (1078 bytes)
	I0315 07:45:35.947607 3490722 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem, removing ...
	I0315 07:45:35.947618 3490722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem
	I0315 07:45:35.947646 3490722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem (1123 bytes)
	I0315 07:45:35.947708 3490722 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem, removing ...
	I0315 07:45:35.947716 3490722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem
	I0315 07:45:35.947743 3490722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem (1679 bytes)
	I0315 07:45:35.947800 3490722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-591842 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-591842]
	I0315 07:45:36.732700 3490722 provision.go:177] copyRemoteCerts
	I0315 07:45:36.732773 3490722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:45:36.732819 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:36.750039 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:36.852024 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 07:45:36.879033 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:45:36.903402 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0315 07:45:36.929005 3490722 provision.go:87] duration metric: took 1.001725871s to configureAuth
	I0315 07:45:36.929037 3490722 ubuntu.go:193] setting minikube options for container-runtime
	I0315 07:45:36.929229 3490722 config.go:182] Loaded profile config "old-k8s-version-591842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0315 07:45:36.929240 3490722 machine.go:97] duration metric: took 4.515578003s to provisionDockerMachine
	I0315 07:45:36.929248 3490722 start.go:293] postStartSetup for "old-k8s-version-591842" (driver="docker")
	I0315 07:45:36.929258 3490722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:45:36.929319 3490722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:45:36.929374 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:36.947821 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:37.053905 3490722 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:45:37.057162 3490722 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 07:45:37.057200 3490722 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 07:45:37.057211 3490722 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 07:45:37.057217 3490722 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0315 07:45:37.057227 3490722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/addons for local assets ...
	I0315 07:45:37.057291 3490722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/files for local assets ...
	I0315 07:45:37.057378 3490722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem -> 33005502.pem in /etc/ssl/certs
	I0315 07:45:37.057528 3490722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:45:37.067864 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem --> /etc/ssl/certs/33005502.pem (1708 bytes)
	I0315 07:45:37.093029 3490722 start.go:296] duration metric: took 163.765321ms for postStartSetup
	I0315 07:45:37.093115 3490722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:45:37.093166 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:37.109205 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:37.205207 3490722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 07:45:37.209662 3490722 fix.go:56] duration metric: took 5.217715301s for fixHost
	I0315 07:45:37.209687 3490722 start.go:83] releasing machines lock for "old-k8s-version-591842", held for 5.217766064s
	I0315 07:45:37.209773 3490722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-591842
	I0315 07:45:37.226805 3490722 ssh_runner.go:195] Run: cat /version.json
	I0315 07:45:37.226855 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:37.227132 3490722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:45:37.227279 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:37.244599 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:37.254851 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:37.342648 3490722 ssh_runner.go:195] Run: systemctl --version
	I0315 07:45:37.499541 3490722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 07:45:37.503980 3490722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0315 07:45:37.523064 3490722 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0315 07:45:37.523199 3490722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:45:37.533446 3490722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
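The two find commands above are minikube's CNI cleanup: the first patches the stock loopback config in place (injecting a "name" field and pinning "cniVersion" to "1.0.0"), the second would sideline any bridge/podman configs, but none were present here. A sketch of what the patched loopback file should contain afterwards (the exact file name under /etc/cni/net.d varies by base image):

	cat /etc/cni/net.d/*loopback.conf*
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }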
	I0315 07:45:37.533470 3490722 start.go:494] detecting cgroup driver to use...
	I0315 07:45:37.533522 3490722 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 07:45:37.533593 3490722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0315 07:45:37.553752 3490722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 07:45:37.568306 3490722 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:45:37.568392 3490722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:45:37.581701 3490722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:45:37.593421 3490722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:45:37.732633 3490722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:45:37.856770 3490722 docker.go:233] disabling docker service ...
	I0315 07:45:37.856886 3490722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:45:37.871239 3490722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:45:37.883022 3490722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:45:37.983919 3490722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:45:38.089870 3490722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:45:38.105175 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:45:38.126181 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0315 07:45:38.140438 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 07:45:38.153274 3490722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 07:45:38.153347 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 07:45:38.167814 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:45:38.179428 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 07:45:38.193508 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:45:38.204454 3490722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:45:38.221203 3490722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 07:45:38.235203 3490722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:45:38.245306 3490722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:45:38.254189 3490722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:45:38.349954 3490722 ssh_runner.go:195] Run: sudo systemctl restart containerd
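The sed pass above rewrites /etc/containerd/config.toml before the restart: the sandbox (pause) image is pinned to the v1.20-era tag, legacy runtime names are normalized to io.containerd.runc.v2, and SystemdCgroup is forced off to match the "cgroupfs" driver detected on the host. A quick check of the result, as a sketch:

	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# Expected after the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.2"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"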
	I0315 07:45:38.528206 3490722 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0315 07:45:38.528306 3490722 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0315 07:45:38.534868 3490722 start.go:562] Will wait 60s for crictl version
	I0315 07:45:38.534966 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:45:38.538856 3490722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:45:38.589746 3490722 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0315 07:45:38.589862 3490722 ssh_runner.go:195] Run: containerd --version
	I0315 07:45:38.612328 3490722 ssh_runner.go:195] Run: containerd --version
	I0315 07:45:38.640217 3490722 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0315 07:45:38.642327 3490722 cli_runner.go:164] Run: docker network inspect old-k8s-version-591842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 07:45:38.663248 3490722 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0315 07:45:38.667822 3490722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
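The bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back with cp rather than mv, since /etc/hosts inside a Docker container is bind-mounted and has to be rewritten in place. Reduced to its two steps, as a sketch:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts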
	I0315 07:45:38.680408 3490722 kubeadm.go:877] updating cluster {Name:old-k8s-version-591842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-591842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:45:38.680540 3490722 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0315 07:45:38.680599 3490722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:45:38.744940 3490722 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:45:38.744966 3490722 containerd.go:519] Images already preloaded, skipping extraction
	I0315 07:45:38.745032 3490722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:45:38.787393 3490722 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:45:38.787417 3490722 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:45:38.787426 3490722 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0315 07:45:38.787543 3490722 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-591842 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-591842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:45:38.787618 3490722 ssh_runner.go:195] Run: sudo crictl info
	I0315 07:45:38.830926 3490722 cni.go:84] Creating CNI manager for ""
	I0315 07:45:38.830950 3490722 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:45:38.830959 3490722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:45:38.830981 3490722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-591842 NodeName:old-k8s-version-591842 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0315 07:45:38.831146 3490722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-591842"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:45:38.831224 3490722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0315 07:45:38.840824 3490722 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:45:38.840902 3490722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:45:38.850055 3490722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0315 07:45:38.870210 3490722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:45:38.888301 3490722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
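At this point the rendered kubeadm config sits at /var/tmp/minikube/kubeadm.yaml.new alongside the kubelet unit files. One way to sanity-check such a config against the matching binaries before a real (re)start, as a sketch (a dry run still executes preflight checks, so root is required):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run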
	I0315 07:45:38.906789 3490722 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0315 07:45:38.910183 3490722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:45:38.921449 3490722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:45:39.008333 3490722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:45:39.029971 3490722 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842 for IP: 192.168.76.2
	I0315 07:45:39.030051 3490722 certs.go:194] generating shared ca certs ...
	I0315 07:45:39.030107 3490722 certs.go:226] acquiring lock for ca certs: {Name:mk9abb58e338d3f021292a49b0c7ea22df42932a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:45:39.030288 3490722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key
	I0315 07:45:39.030361 3490722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key
	I0315 07:45:39.030400 3490722 certs.go:256] generating profile certs ...
	I0315 07:45:39.030574 3490722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.key
	I0315 07:45:39.030696 3490722 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/apiserver.key.df0d5480
	I0315 07:45:39.030778 3490722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/proxy-client.key
	I0315 07:45:39.030956 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550.pem (1338 bytes)
	W0315 07:45:39.031016 3490722 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550_empty.pem, impossibly tiny 0 bytes
	I0315 07:45:39.031042 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 07:45:39.031156 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:45:39.031216 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:45:39.031277 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem (1679 bytes)
	I0315 07:45:39.031394 3490722 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem (1708 bytes)
	I0315 07:45:39.032291 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:45:39.063184 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0315 07:45:39.095832 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:45:39.147712 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:45:39.206776 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0315 07:45:39.255146 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 07:45:39.282312 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:45:39.307866 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:45:39.357123 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem --> /usr/share/ca-certificates/33005502.pem (1708 bytes)
	I0315 07:45:39.395381 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:45:39.422108 3490722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550.pem --> /usr/share/ca-certificates/3300550.pem (1338 bytes)
	I0315 07:45:39.452086 3490722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:45:39.474930 3490722 ssh_runner.go:195] Run: openssl version
	I0315 07:45:39.483243 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33005502.pem && ln -fs /usr/share/ca-certificates/33005502.pem /etc/ssl/certs/33005502.pem"
	I0315 07:45:39.498298 3490722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33005502.pem
	I0315 07:45:39.502110 3490722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 07:07 /usr/share/ca-certificates/33005502.pem
	I0315 07:45:39.502177 3490722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33005502.pem
	I0315 07:45:39.509810 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33005502.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 07:45:39.520714 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:45:39.531541 3490722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:45:39.535904 3490722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 07:01 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:45:39.535990 3490722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:45:39.545789 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:45:39.555744 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3300550.pem && ln -fs /usr/share/ca-certificates/3300550.pem /etc/ssl/certs/3300550.pem"
	I0315 07:45:39.565253 3490722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3300550.pem
	I0315 07:45:39.570419 3490722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 07:07 /usr/share/ca-certificates/3300550.pem
	I0315 07:45:39.570524 3490722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3300550.pem
	I0315 07:45:39.577981 3490722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3300550.pem /etc/ssl/certs/51391683.0"
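The openssl x509 -hash calls above compute each certificate's subject hash, and the <hash>.0 symlinks in /etc/ssl/certs are how OpenSSL-based clients locate a trusted CA. Spelled out for one of the certs handled above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0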
	I0315 07:45:39.586905 3490722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:45:39.590804 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 07:45:39.598122 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 07:45:39.604865 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 07:45:39.611434 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 07:45:39.618501 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 07:45:39.630953 3490722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 07:45:39.639138 3490722 kubeadm.go:391] StartCluster: {Name:old-k8s-version-591842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-591842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:45:39.639279 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0315 07:45:39.639366 3490722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:45:39.694822 3490722 cri.go:89] found id: "f55d7b6eb2f971daafdac9c03f63d864ce34ba3d3c98bc693596d24c7eb631b5"
	I0315 07:45:39.694889 3490722 cri.go:89] found id: "7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:45:39.694909 3490722 cri.go:89] found id: "1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:45:39.694937 3490722 cri.go:89] found id: "a957dd8724eabeb8d00b83d208e2da1998096377aba22d8b667ceb5b0e5bbfaa"
	I0315 07:45:39.694957 3490722 cri.go:89] found id: "7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:45:39.694975 3490722 cri.go:89] found id: "6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:45:39.694994 3490722 cri.go:89] found id: "eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:45:39.695018 3490722 cri.go:89] found id: "aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:45:39.695036 3490722 cri.go:89] found id: "288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:45:39.695054 3490722 cri.go:89] found id: ""
	I0315 07:45:39.695139 3490722 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0315 07:45:39.708696 3490722 cri.go:116] JSON = null
	W0315 07:45:39.708802 3490722 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 9
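The warning is the restart path reconciling two views of kube-system: crictl ps -a reported nine containers, while runc's listing of paused containers came back empty, so there is nothing to unpause and minikube falls through to a normal restart. The two probes being compared, as run in the log:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json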
	I0315 07:45:39.708893 3490722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0315 07:45:39.719361 3490722 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0315 07:45:39.719384 3490722 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0315 07:45:39.719390 3490722 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0315 07:45:39.719439 3490722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 07:45:39.729467 3490722 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 07:45:39.729905 3490722 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-591842" does not appear in /home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:45:39.730015 3490722 kubeconfig.go:62] /home/jenkins/minikube-integration/18213-3295134/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-591842" cluster setting kubeconfig missing "old-k8s-version-591842" context setting]
	I0315 07:45:39.730313 3490722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/kubeconfig: {Name:mka8a8bb165c8233f51f8705aa64be6997cc72a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:45:39.731585 3490722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 07:45:39.742859 3490722 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0315 07:45:39.742895 3490722 kubeadm.go:591] duration metric: took 23.499227ms to restartPrimaryControlPlane
	I0315 07:45:39.742904 3490722 kubeadm.go:393] duration metric: took 103.77824ms to StartCluster
	I0315 07:45:39.742919 3490722 settings.go:142] acquiring lock: {Name:mk9341f71218475f44486dc55acbce7236fa3a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:45:39.742979 3490722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:45:39.743619 3490722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/kubeconfig: {Name:mka8a8bb165c8233f51f8705aa64be6997cc72a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:45:39.743840 3490722 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0315 07:45:39.746678 3490722 out.go:177] * Verifying Kubernetes components...
	I0315 07:45:39.744145 3490722 config.go:182] Loaded profile config "old-k8s-version-591842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0315 07:45:39.744163 3490722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 07:45:39.748762 3490722 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-591842"
	I0315 07:45:39.748798 3490722 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-591842"
	W0315 07:45:39.748810 3490722 addons.go:243] addon storage-provisioner should already be in state true
	I0315 07:45:39.748843 3490722 host.go:66] Checking if "old-k8s-version-591842" exists ...
	I0315 07:45:39.749280 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:39.749420 3490722 addons.go:69] Setting dashboard=true in profile "old-k8s-version-591842"
	I0315 07:45:39.749445 3490722 addons.go:234] Setting addon dashboard=true in "old-k8s-version-591842"
	W0315 07:45:39.749455 3490722 addons.go:243] addon dashboard should already be in state true
	I0315 07:45:39.749483 3490722 host.go:66] Checking if "old-k8s-version-591842" exists ...
	I0315 07:45:39.749842 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:39.750141 3490722 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-591842"
	I0315 07:45:39.750172 3490722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-591842"
	I0315 07:45:39.750424 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:39.750681 3490722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:45:39.750942 3490722 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-591842"
	I0315 07:45:39.750971 3490722 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-591842"
	W0315 07:45:39.750979 3490722 addons.go:243] addon metrics-server should already be in state true
	I0315 07:45:39.751004 3490722 host.go:66] Checking if "old-k8s-version-591842" exists ...
	I0315 07:45:39.751413 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:39.823674 3490722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 07:45:39.825627 3490722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:45:39.827500 3490722 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0315 07:45:39.825667 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 07:45:39.829426 3490722 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0315 07:45:39.827582 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:39.831495 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0315 07:45:39.831527 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0315 07:45:39.831587 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:39.839352 3490722 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0315 07:45:39.841483 3490722 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 07:45:39.841504 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 07:45:39.841581 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:39.853521 3490722 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-591842"
	W0315 07:45:39.853543 3490722 addons.go:243] addon default-storageclass should already be in state true
	I0315 07:45:39.853570 3490722 host.go:66] Checking if "old-k8s-version-591842" exists ...
	I0315 07:45:39.853963 3490722 cli_runner.go:164] Run: docker container inspect old-k8s-version-591842 --format={{.State.Status}}
	I0315 07:45:39.895229 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:39.923095 3490722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:39.923116 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 07:45:39.923174 3490722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-591842
	I0315 07:45:39.927223 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:39.936021 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:39.956852 3490722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36975 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/old-k8s-version-591842/id_rsa Username:docker}
	I0315 07:45:40.008007 3490722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:45:40.056764 3490722 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-591842" to be "Ready" ...
	I0315 07:45:40.071430 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:45:40.150435 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:40.190030 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0315 07:45:40.190113 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0315 07:45:40.231798 3490722 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 07:45:40.231873 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0315 07:45:40.275767 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0315 07:45:40.275790 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0315 07:45:40.319610 3490722 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 07:45:40.319635 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0315 07:45:40.356109 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.356145 3490722 retry.go:31] will retry after 295.704529ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.402677 3490722 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:45:40.402704 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 07:45:40.403009 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0315 07:45:40.403025 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0315 07:45:40.465816 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0315 07:45:40.465835 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0315 07:45:40.478364 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:40.484358 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.484433 3490722 retry.go:31] will retry after 338.188739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.534778 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0315 07:45:40.534857 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0315 07:45:40.565132 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0315 07:45:40.565210 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0315 07:45:40.605074 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0315 07:45:40.605146 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0315 07:45:40.652407 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:45:40.694644 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0315 07:45:40.694716 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0315 07:45:40.750978 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.751068 3490722 retry.go:31] will retry after 306.212544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.779952 3490722 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0315 07:45:40.780023 3490722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0315 07:45:40.817048 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.817134 3490722 retry.go:31] will retry after 493.270705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:40.823224 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:40.834462 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:41.017380 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.017456 3490722 retry.go:31] will retry after 282.135277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0315 07:45:41.017570 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.017608 3490722 retry.go:31] will retry after 214.229924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.059318 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:45:41.231989 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:41.242716 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.242752 3490722 retry.go:31] will retry after 370.832184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.300032 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:41.311312 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0315 07:45:41.503700 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.503735 3490722 retry.go:31] will retry after 382.859403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0315 07:45:41.573797 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.573832 3490722 retry.go:31] will retry after 421.507753ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0315 07:45:41.597568 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.597596 3490722 retry.go:31] will retry after 769.983218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.613908 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:41.744066 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.744096 3490722 retry.go:31] will retry after 759.762667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:41.886772 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0315 07:45:41.996261 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0315 07:45:42.019053 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.019156 3490722 retry.go:31] will retry after 650.163603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.058807 3490722 node_ready.go:53] error getting node "old-k8s-version-591842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-591842": dial tcp 192.168.76.2:8443: connect: connection refused
	W0315 07:45:42.157339 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.157375 3490722 retry.go:31] will retry after 881.20092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.367744 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0315 07:45:42.475343 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.475393 3490722 retry.go:31] will retry after 586.68407ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.504685 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:42.632293 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.632373 3490722 retry.go:31] will retry after 787.041708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.670419 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:42.808605 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:42.808684 3490722 retry.go:31] will retry after 1.016403314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.038796 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:43.062569 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0315 07:45:43.253111 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.253140 3490722 retry.go:31] will retry after 734.956909ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0315 07:45:43.268517 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.268546 3490722 retry.go:31] will retry after 1.198707939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.419783 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:43.572010 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.572093 3490722 retry.go:31] will retry after 1.030643618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.825349 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:43.914939 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.914981 3490722 retry.go:31] will retry after 1.593274041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:43.989165 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0315 07:45:44.079597 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:44.079693 3490722 retry.go:31] will retry after 1.255103362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:44.468126 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0315 07:45:44.544751 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:44.544783 3490722 retry.go:31] will retry after 1.986996585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:44.558294 3490722 node_ready.go:53] error getting node "old-k8s-version-591842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-591842": dial tcp 192.168.76.2:8443: connect: connection refused
	I0315 07:45:44.603560 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:44.685563 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:44.685645 3490722 retry.go:31] will retry after 2.532501055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:45.335058 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0315 07:45:45.427626 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:45.427686 3490722 retry.go:31] will retry after 1.730729892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:45.508997 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:45.585805 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:45.585840 3490722 retry.go:31] will retry after 2.386594212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:46.532002 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:45:46.558575 3490722 node_ready.go:53] error getting node "old-k8s-version-591842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-591842": dial tcp 192.168.76.2:8443: connect: connection refused
	W0315 07:45:46.612477 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:46.612504 3490722 retry.go:31] will retry after 3.523071336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:47.158904 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:47.218334 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0315 07:45:47.230127 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:47.230168 3490722 retry.go:31] will retry after 5.073297119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0315 07:45:47.294189 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:47.294220 3490722 retry.go:31] will retry after 3.03442375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:47.973432 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0315 07:45:48.076816 3490722 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:48.076858 3490722 retry.go:31] will retry after 4.031308819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0315 07:45:50.136270 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 07:45:50.329440 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 07:45:52.108772 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0315 07:45:52.304274 3490722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0315 07:45:56.324907 3490722 node_ready.go:49] node "old-k8s-version-591842" has status "Ready":"True"
	I0315 07:45:56.324939 3490722 node_ready.go:38] duration metric: took 16.267648297s for node "old-k8s-version-591842" to be "Ready" ...
	I0315 07:45:56.324950 3490722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:45:56.439554 3490722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-5zhc9" in "kube-system" namespace to be "Ready" ...
	I0315 07:45:56.580888 3490722 pod_ready.go:92] pod "coredns-74ff55c5b-5zhc9" in "kube-system" namespace has status "Ready":"True"
	I0315 07:45:56.580908 3490722 pod_ready.go:81] duration metric: took 141.32728ms for pod "coredns-74ff55c5b-5zhc9" in "kube-system" namespace to be "Ready" ...
	I0315 07:45:56.580920 3490722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:45:57.445352 3490722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.309045421s)
	I0315 07:45:57.445455 3490722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.115989509s)
	I0315 07:45:57.445468 3490722 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-591842"
	I0315 07:45:57.592288 3490722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.483462834s)
	I0315 07:45:57.594766 3490722 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-591842 addons enable metrics-server
	
	I0315 07:45:57.592445 3490722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.288144871s)
	I0315 07:45:57.607531 3490722 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0315 07:45:57.609525 3490722 addons.go:505] duration metric: took 17.865363033s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0315 07:45:58.587630 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:00.593736 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:03.088532 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:05.092964 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:07.587661 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:09.588704 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:12.089068 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:14.588202 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:17.087714 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:19.590875 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:21.595824 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:24.089505 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:26.586843 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:28.587590 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:31.096810 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:33.586723 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:35.587751 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:37.609272 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:40.088641 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:42.089851 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:44.586855 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:46.587063 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:48.588260 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:51.091727 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:53.588199 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:56.088829 3490722 pod_ready.go:102] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:46:58.086564 3490722 pod_ready.go:92] pod "etcd-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"True"
	I0315 07:46:58.086592 3490722 pod_ready.go:81] duration metric: took 1m1.505664769s for pod "etcd-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:46:58.086608 3490722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:46:58.092323 3490722 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"True"
	I0315 07:46:58.092399 3490722 pod_ready.go:81] duration metric: took 5.78233ms for pod "kube-apiserver-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:46:58.092430 3490722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:00.132617 3490722 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"True"
	I0315 07:47:00.132695 3490722 pod_ready.go:81] duration metric: took 2.040241761s for pod "kube-controller-manager-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:00.132726 3490722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdn2n" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:00.165121 3490722 pod_ready.go:92] pod "kube-proxy-pdn2n" in "kube-system" namespace has status "Ready":"True"
	I0315 07:47:00.165167 3490722 pod_ready.go:81] duration metric: took 32.412512ms for pod "kube-proxy-pdn2n" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:00.165183 3490722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:02.172416 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:04.671598 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:06.671859 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:09.171153 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:11.172141 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:13.172616 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:15.174436 3490722 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:16.670860 3490722 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace has status "Ready":"True"
	I0315 07:47:16.670887 3490722 pod_ready.go:81] duration metric: took 16.505696169s for pod "kube-scheduler-old-k8s-version-591842" in "kube-system" namespace to be "Ready" ...
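Each wait above is the same pod_ready polling loop: query the pod, log "Ready":"False" from pod_ready.go:102 until the condition flips to "True", then record the elapsed duration at pod_ready.go:81. A minimal out-of-band equivalent for one of these waits, assuming (not shown in this log) that the kubeconfig context name matches the profile name:

	kubectl --context old-k8s-version-591842 -n kube-system \
	  wait --for=condition=Ready pod/kube-scheduler-old-k8s-version-591842 --timeout=6m0s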
	I0315 07:47:16.670899 3490722 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace to be "Ready" ...
	I0315 07:47:18.676148 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:20.677149 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:23.177510 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:25.178450 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:27.180486 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:29.681585 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:32.177892 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:34.677091 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:36.677208 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:39.176547 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:41.177100 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:43.678716 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:46.176844 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:48.677193 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:51.177733 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:53.178577 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:55.676892 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:47:57.677077 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:00.218163 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:02.677200 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:05.177821 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:07.676528 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:09.677363 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:12.176977 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:14.178047 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:16.676389 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:18.677490 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:21.177447 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:23.676806 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:26.176859 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:28.676490 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:30.677553 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:32.677687 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:35.177917 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:37.203184 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:39.677066 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:41.677450 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:43.680417 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:46.177075 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:48.178046 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:50.677480 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:53.176783 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:55.177798 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:57.178054 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:48:59.178669 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:01.677266 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:04.176806 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:06.176940 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:08.677227 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:11.177179 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:13.177464 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:15.178146 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:17.677076 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:20.177565 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:22.678851 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:25.178131 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:27.678624 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:30.177609 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:32.677675 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:34.678026 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:37.177468 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:39.677035 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:41.677587 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:43.677681 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:46.176790 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:48.179036 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:50.677102 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:52.677513 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:55.178196 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:57.677505 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:49:59.683771 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:02.177341 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:04.177710 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:06.676980 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:08.677377 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:11.176867 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:13.179158 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:15.676683 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:18.179596 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:20.677883 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:23.176574 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:25.177434 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:27.177893 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:29.678095 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:31.685790 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:34.176272 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:36.178511 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:38.676751 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:40.678288 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:43.177996 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:45.186438 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:47.678227 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:50.177996 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:52.677058 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:55.177803 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:50:57.676940 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:00.192744 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:02.678433 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:05.177402 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:07.686339 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:10.178061 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:12.677423 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:14.677510 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:16.677351 3490722 pod_ready.go:81] duration metric: took 4m0.006438222s for pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace to be "Ready" ...
	E0315 07:51:16.677377 3490722 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:51:16.677387 3490722 pod_ready.go:38] duration metric: took 5m20.352426308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
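The 4m0s extra wait expires because metrics-server-9975d5f86-9j72g never reports Ready: the kubelet entries gathered further down show its container stuck in ErrImagePull/ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, a hostname that does not resolve. A hedged way to surface the same failure directly (again assuming the context name matches the profile) would be:

	kubectl --context old-k8s-version-591842 -n kube-system \
	  describe pod metrics-server-9975d5f86-9j72g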
	I0315 07:51:16.677401 3490722 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:51:16.677430 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:51:16.677492 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:51:16.741347 3490722 cri.go:89] found id: "3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:16.741365 3490722 cri.go:89] found id: "aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:16.741370 3490722 cri.go:89] found id: ""
	I0315 07:51:16.741377 3490722 logs.go:276] 2 containers: [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5]
	I0315 07:51:16.741429 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.745174 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.748785 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0315 07:51:16.748846 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:51:16.809837 3490722 cri.go:89] found id: "3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:16.809901 3490722 cri.go:89] found id: "288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:16.809917 3490722 cri.go:89] found id: ""
	I0315 07:51:16.809925 3490722 logs.go:276] 2 containers: [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf]
	I0315 07:51:16.809988 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.821230 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.832434 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0315 07:51:16.832513 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:51:16.893799 3490722 cri.go:89] found id: "9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:16.893823 3490722 cri.go:89] found id: "7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:16.893828 3490722 cri.go:89] found id: ""
	I0315 07:51:16.893836 3490722 logs.go:276] 2 containers: [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc]
	I0315 07:51:16.893889 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.898028 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.902368 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:51:16.902437 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:51:16.958613 3490722 cri.go:89] found id: "19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:16.958636 3490722 cri.go:89] found id: "eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:16.958642 3490722 cri.go:89] found id: ""
	I0315 07:51:16.958649 3490722 logs.go:276] 2 containers: [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d]
	I0315 07:51:16.958703 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.962711 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.966605 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:51:16.966681 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:51:17.016946 3490722 cri.go:89] found id: "24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:17.016970 3490722 cri.go:89] found id: "1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:17.016976 3490722 cri.go:89] found id: ""
	I0315 07:51:17.016985 3490722 logs.go:276] 2 containers: [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439]
	I0315 07:51:17.017041 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.020761 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.027026 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:51:17.027194 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:51:17.081451 3490722 cri.go:89] found id: "39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:17.081475 3490722 cri.go:89] found id: "6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:17.081480 3490722 cri.go:89] found id: ""
	I0315 07:51:17.081487 3490722 logs.go:276] 2 containers: [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d]
	I0315 07:51:17.081544 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.085402 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.088913 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0315 07:51:17.088999 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:51:17.152035 3490722 cri.go:89] found id: "d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:17.152107 3490722 cri.go:89] found id: "7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:17.152125 3490722 cri.go:89] found id: ""
	I0315 07:51:17.152146 3490722 logs.go:276] 2 containers: [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e]
	I0315 07:51:17.152229 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.156132 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.159619 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:51:17.159727 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:51:17.208743 3490722 cri.go:89] found id: "141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:17.208812 3490722 cri.go:89] found id: ""
	I0315 07:51:17.208834 3490722 logs.go:276] 1 containers: [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c]
	I0315 07:51:17.208916 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.212643 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:51:17.212756 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:51:17.276269 3490722 cri.go:89] found id: "ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:17.276352 3490722 cri.go:89] found id: "4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:17.276372 3490722 cri.go:89] found id: ""
	I0315 07:51:17.276394 3490722 logs.go:276] 2 containers: [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2]
	I0315 07:51:17.276485 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.281157 3490722 ssh_runner.go:195] Run: which crictl
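The block above enumerates, component by component, every matching CRI container ID (crictl ps -a --quiet --name=...) and re-resolves the crictl path before the log-gathering passes that follow. A compact sketch of the same two-step pattern for a single component, built only from commands already shown in this log:

	for id in $(sudo crictl ps -a --quiet --name=etcd); do
	  sudo /usr/bin/crictl logs --tail 400 "$id"
	done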
	I0315 07:51:17.285880 3490722 logs.go:123] Gathering logs for kube-apiserver [aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5] ...
	I0315 07:51:17.285975 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:17.383966 3490722 logs.go:123] Gathering logs for etcd [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe] ...
	I0315 07:51:17.383996 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:17.452945 3490722 logs.go:123] Gathering logs for etcd [288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf] ...
	I0315 07:51:17.453085 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:17.518825 3490722 logs.go:123] Gathering logs for kube-proxy [1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439] ...
	I0315 07:51:17.518972 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:17.582975 3490722 logs.go:123] Gathering logs for kube-controller-manager [6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d] ...
	I0315 07:51:17.582999 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:17.721400 3490722 logs.go:123] Gathering logs for kindnet [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842] ...
	I0315 07:51:17.721438 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:17.782289 3490722 logs.go:123] Gathering logs for kubelet ...
	I0315 07:51:17.782317 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0315 07:51:17.844998 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151553     663 reflector.go:138] object-"kube-system"/"metrics-server-token-h2gnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-h2gnq" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845272 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151674     663 reflector.go:138] object-"default"/"default-token-82cb9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-82cb9" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845526 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151731     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845762 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151795     663 reflector.go:138] object-"kube-system"/"coredns-token-vw5pl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vw5pl" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845999 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151873     663 reflector.go:138] object-"kube-system"/"kindnet-token-jrrqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jrrqr" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846238 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151952     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-vqzhx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vqzhx" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846467 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152014     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846715 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152079     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hnqzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hnqzn" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.857570 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:58 old-k8s-version-591842 kubelet[663]: E0315 07:45:58.854597     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.857857 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:59 old-k8s-version-591842 kubelet[663]: E0315 07:45:59.802434     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.861219 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:14 old-k8s-version-591842 kubelet[663]: E0315 07:46:14.653468     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.863383 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:20 old-k8s-version-591842 kubelet[663]: E0315 07:46:20.888372     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.863745 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:21 old-k8s-version-591842 kubelet[663]: E0315 07:46:21.893270     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.864106 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:22 old-k8s-version-591842 kubelet[663]: E0315 07:46:22.992818     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.864314 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:25 old-k8s-version-591842 kubelet[663]: E0315 07:46:25.647507     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.865123 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:30 old-k8s-version-591842 kubelet[663]: E0315 07:46:30.928822     663 pod_workers.go:191] Error syncing pod 1b02bdb3-5934-4002-980c-769d1de68357 ("storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"
	W0315 07:51:17.865704 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:35 old-k8s-version-591842 kubelet[663]: E0315 07:46:35.946453     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.868835 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:37 old-k8s-version-591842 kubelet[663]: E0315 07:46:37.659626     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.869661 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:42 old-k8s-version-591842 kubelet[663]: E0315 07:46:42.993141     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.869869 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:48 old-k8s-version-591842 kubelet[663]: E0315 07:46:48.651057     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.870811 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:58 old-k8s-version-591842 kubelet[663]: E0315 07:46:58.015895     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.871197 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:02 old-k8s-version-591842 kubelet[663]: E0315 07:47:02.992859     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.871475 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:03 old-k8s-version-591842 kubelet[663]: E0315 07:47:03.649062     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.871720 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:15 old-k8s-version-591842 kubelet[663]: E0315 07:47:15.644773     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.872103 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:16 old-k8s-version-591842 kubelet[663]: E0315 07:47:16.645024     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.874664 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:29 old-k8s-version-591842 kubelet[663]: E0315 07:47:29.656112     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.875037 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:30 old-k8s-version-591842 kubelet[663]: E0315 07:47:30.644347     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.875270 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:42 old-k8s-version-591842 kubelet[663]: E0315 07:47:42.645173     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.875894 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:45 old-k8s-version-591842 kubelet[663]: E0315 07:47:45.161682     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.876256 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:52 old-k8s-version-591842 kubelet[663]: E0315 07:47:52.992815     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.876463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:54 old-k8s-version-591842 kubelet[663]: E0315 07:47:54.645803     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.876813 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.644489     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.877020 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.645236     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.877369 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.644500     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.877577 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.645259     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.877786 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:29 old-k8s-version-591842 kubelet[663]: E0315 07:48:29.644702     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.878133 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:33 old-k8s-version-591842 kubelet[663]: E0315 07:48:33.645925     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.878366 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:42 old-k8s-version-591842 kubelet[663]: E0315 07:48:42.644688     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.878717 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:45 old-k8s-version-591842 kubelet[663]: E0315 07:48:45.644849     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.881377 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:57 old-k8s-version-591842 kubelet[663]: E0315 07:48:57.654804     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.881741 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:59 old-k8s-version-591842 kubelet[663]: E0315 07:48:59.644341     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.881947 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:09 old-k8s-version-591842 kubelet[663]: E0315 07:49:09.653348     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.882559 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:14 old-k8s-version-591842 kubelet[663]: E0315 07:49:14.388467     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.882911 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:22 old-k8s-version-591842 kubelet[663]: E0315 07:49:22.993103     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.883162 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:23 old-k8s-version-591842 kubelet[663]: E0315 07:49:23.645937     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.883370 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:35 old-k8s-version-591842 kubelet[663]: E0315 07:49:35.645912     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.883715 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:36 old-k8s-version-591842 kubelet[663]: E0315 07:49:36.644319     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.883929 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:47 old-k8s-version-591842 kubelet[663]: E0315 07:49:47.658879     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.884276 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:50 old-k8s-version-591842 kubelet[663]: E0315 07:49:50.644297     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.884485 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:02 old-k8s-version-591842 kubelet[663]: E0315 07:50:02.644922     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.884840 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:05 old-k8s-version-591842 kubelet[663]: E0315 07:50:05.644906     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.885212 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.645285     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.885484 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.647267     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.889275 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: E0315 07:50:28.644332     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.889531 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:29 old-k8s-version-591842 kubelet[663]: E0315 07:50:29.645326     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.889739 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:42 old-k8s-version-591842 kubelet[663]: E0315 07:50:42.644973     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.890100 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.890309 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.890655 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.891006 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.891245 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
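
The "Found kubelet problem" entries above are produced by minikube's log scanner (logs.go:138), which greps the kubelet journal for known failure patterns such as CrashLoopBackOff and ImagePullBackOff. A minimal sketch of that kind of filter in Go, assuming a hypothetical substring-based pattern set rather than minikube's actual rules:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // scanKubeletProblems reports journal lines that look like kubelet
    // pod-sync failures, mirroring the filtering visible in the log above.
    // The matched substrings are assumptions for illustration only.
    func scanKubeletProblems(lines []string) []string {
        patterns := []string{"CrashLoopBackOff", "ImagePullBackOff", "ErrImagePull"}
        var problems []string
        for _, line := range lines {
            for _, p := range patterns {
                if strings.Contains(line, p) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems
    }

    func main() {
        // Read journal output, e.g. piped from: journalctl -u kubelet -n 400
        sc := bufio.NewScanner(os.Stdin)
        var lines []string
        for sc.Scan() {
            lines = append(lines, sc.Text())
        }
        for _, p := range scanKubeletProblems(lines) {
            fmt.Println("Found kubelet problem:", p)
        }
    }
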
	I0315 07:51:17.891273 3490722 logs.go:123] Gathering logs for dmesg ...
	I0315 07:51:17.891305 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:51:17.929063 3490722 logs.go:123] Gathering logs for kindnet [7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e] ...
	I0315 07:51:17.929137 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:17.993303 3490722 logs.go:123] Gathering logs for kube-scheduler [eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d] ...
	I0315 07:51:17.993327 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:18.145483 3490722 logs.go:123] Gathering logs for container status ...
	I0315 07:51:18.145518 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:51:18.232250 3490722 logs.go:123] Gathering logs for kube-apiserver [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465] ...
	I0315 07:51:18.232279 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:18.307131 3490722 logs.go:123] Gathering logs for coredns [7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc] ...
	I0315 07:51:18.307162 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:18.372534 3490722 logs.go:123] Gathering logs for kubernetes-dashboard [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c] ...
	I0315 07:51:18.372563 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:18.423802 3490722 logs.go:123] Gathering logs for coredns [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e] ...
	I0315 07:51:18.423835 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:18.484538 3490722 logs.go:123] Gathering logs for kube-controller-manager [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2] ...
	I0315 07:51:18.484567 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:18.734982 3490722 logs.go:123] Gathering logs for kube-proxy [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3] ...
	I0315 07:51:18.735022 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:18.989000 3490722 logs.go:123] Gathering logs for storage-provisioner [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803] ...
	I0315 07:51:18.989036 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:19.097844 3490722 logs.go:123] Gathering logs for storage-provisioner [4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2] ...
	I0315 07:51:19.097876 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:19.152808 3490722 logs.go:123] Gathering logs for containerd ...
	I0315 07:51:19.152845 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0315 07:51:19.226776 3490722 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:51:19.226813 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:51:19.465101 3490722 logs.go:123] Gathering logs for kube-scheduler [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423] ...
	I0315 07:51:19.465178 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
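
Each "Gathering logs for ..." step above pairs a logs.go entry with an ssh_runner command that tails the container's output via crictl. A local, single-host sketch of the same call in Go; the sudo and SSH hop that minikube uses are omitted, and the container ID below is the apiserver ID prefix taken from the log (crictl accepts ID prefixes):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherContainerLogs fetches the last n lines of a container's logs via
    // crictl, similar in spirit to the gathering steps in the log above.
    func gatherContainerLogs(containerID string, n int) (string, error) {
        out, err := exec.Command("crictl", "logs", "--tail", fmt.Sprint(n), containerID).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := gatherContainerLogs("3aee10dec5ed", 400)
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Print(logs)
    }
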
	I0315 07:51:19.520188 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:19.520213 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0315 07:51:19.520268 3490722 out.go:239] X Problems detected in kubelet:
	W0315 07:51:19.520277 3490722 out.go:239]   Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520285 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:19.520293 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520302 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520311 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0315 07:51:19.520322 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:19.520329 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:29.521502 3490722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:51:29.538525 3490722 api_server.go:72] duration metric: took 5m49.79464477s to wait for apiserver process to appear ...
	I0315 07:51:29.538551 3490722 api_server.go:88] waiting for apiserver healthz status ...
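
After the kubelet checks, the driver waits for the apiserver to report healthy. A minimal Go sketch of polling the /healthz endpoint, assuming the node address and port (minikube resolves the real ones from the cluster config, and skipping TLS verification here is for illustration only, not how a production check should behave):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 OK or the deadline passes, analogous to the "waiting for apiserver
    // healthz status" step above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        // The address below is an assumed example, not taken from the log.
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
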
	I0315 07:51:29.538596 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:51:29.538676 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:51:29.599859 3490722 cri.go:89] found id: "3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:29.599884 3490722 cri.go:89] found id: "aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:29.599889 3490722 cri.go:89] found id: ""
	I0315 07:51:29.599897 3490722 logs.go:276] 2 containers: [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5]
	I0315 07:51:29.599971 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.605961 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.610920 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0315 07:51:29.611054 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:51:29.685499 3490722 cri.go:89] found id: "3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:29.685526 3490722 cri.go:89] found id: "288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:29.685530 3490722 cri.go:89] found id: ""
	I0315 07:51:29.685537 3490722 logs.go:276] 2 containers: [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf]
	I0315 07:51:29.685610 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.690445 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.695366 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0315 07:51:29.695452 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:51:29.771248 3490722 cri.go:89] found id: "9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:29.771267 3490722 cri.go:89] found id: "7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:29.771272 3490722 cri.go:89] found id: ""
	I0315 07:51:29.771279 3490722 logs.go:276] 2 containers: [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc]
	I0315 07:51:29.771337 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.776267 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.793698 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:51:29.793769 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:51:29.845789 3490722 cri.go:89] found id: "19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:29.845851 3490722 cri.go:89] found id: "eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:29.845878 3490722 cri.go:89] found id: ""
	I0315 07:51:29.845898 3490722 logs.go:276] 2 containers: [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d]
	I0315 07:51:29.845985 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.851019 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.855662 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:51:29.855846 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:51:29.914978 3490722 cri.go:89] found id: "24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:29.915050 3490722 cri.go:89] found id: "1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:29.915095 3490722 cri.go:89] found id: ""
	I0315 07:51:29.915121 3490722 logs.go:276] 2 containers: [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439]
	I0315 07:51:29.915205 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.923722 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.927822 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:51:29.927942 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:51:29.989735 3490722 cri.go:89] found id: "39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:29.989797 3490722 cri.go:89] found id: "6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:29.989825 3490722 cri.go:89] found id: ""
	I0315 07:51:29.989846 3490722 logs.go:276] 2 containers: [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d]
	I0315 07:51:29.989933 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.994641 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.998628 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0315 07:51:29.998750 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:51:30.121952 3490722 cri.go:89] found id: "d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:30.122032 3490722 cri.go:89] found id: "7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:30.122054 3490722 cri.go:89] found id: ""
	I0315 07:51:30.122083 3490722 logs.go:276] 2 containers: [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e]
	I0315 07:51:30.122196 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.127774 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.144595 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:51:30.144766 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:51:30.219866 3490722 cri.go:89] found id: "141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:30.219939 3490722 cri.go:89] found id: ""
	I0315 07:51:30.219962 3490722 logs.go:276] 1 containers: [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c]
	I0315 07:51:30.220076 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.226314 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:51:30.226500 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:51:30.285822 3490722 cri.go:89] found id: "ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:30.285848 3490722 cri.go:89] found id: "4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:30.285854 3490722 cri.go:89] found id: ""
	I0315 07:51:30.285862 3490722 logs.go:276] 2 containers: [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2]
	I0315 07:51:30.285924 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.291346 3490722 ssh_runner.go:195] Run: which crictl
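
The listing steps above enumerate CRI containers one component at a time with "crictl ps -a --quiet --name=<component>"; the trailing empty `found id: ""` marks the end of each result set. A Go sketch of the same enumeration that drops the empty terminator line:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns all container IDs (running or exited) whose
    // name matches the given filter, using the same crictl invocation that
    // appears in the listing steps above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
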
	I0315 07:51:30.296522 3490722 logs.go:123] Gathering logs for kubernetes-dashboard [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c] ...
	I0315 07:51:30.296594 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:30.367022 3490722 logs.go:123] Gathering logs for storage-provisioner [4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2] ...
	I0315 07:51:30.367147 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:30.420538 3490722 logs.go:123] Gathering logs for containerd ...
	I0315 07:51:30.420567 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0315 07:51:30.496522 3490722 logs.go:123] Gathering logs for container status ...
	I0315 07:51:30.496650 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:51:30.579010 3490722 logs.go:123] Gathering logs for kube-scheduler [eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d] ...
	I0315 07:51:30.579049 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:30.630592 3490722 logs.go:123] Gathering logs for kube-proxy [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3] ...
	I0315 07:51:30.630626 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:30.678109 3490722 logs.go:123] Gathering logs for kindnet [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842] ...
	I0315 07:51:30.678138 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:30.723399 3490722 logs.go:123] Gathering logs for kube-apiserver [aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5] ...
	I0315 07:51:30.723428 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:30.797968 3490722 logs.go:123] Gathering logs for dmesg ...
	I0315 07:51:30.798005 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:51:30.816622 3490722 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:51:30.816653 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:51:30.972164 3490722 logs.go:123] Gathering logs for kube-apiserver [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465] ...
	I0315 07:51:30.972197 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:31.050833 3490722 logs.go:123] Gathering logs for kindnet [7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e] ...
	I0315 07:51:31.050875 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:31.123735 3490722 logs.go:123] Gathering logs for storage-provisioner [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803] ...
	I0315 07:51:31.123775 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:31.190207 3490722 logs.go:123] Gathering logs for coredns [7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc] ...
	I0315 07:51:31.190236 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:31.249826 3490722 logs.go:123] Gathering logs for kube-controller-manager [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2] ...
	I0315 07:51:31.249855 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:31.354117 3490722 logs.go:123] Gathering logs for kube-controller-manager [6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d] ...
	I0315 07:51:31.354155 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:31.441541 3490722 logs.go:123] Gathering logs for coredns [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e] ...
	I0315 07:51:31.441581 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:31.515474 3490722 logs.go:123] Gathering logs for kube-scheduler [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423] ...
	I0315 07:51:31.515503 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:31.589337 3490722 logs.go:123] Gathering logs for kube-proxy [1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439] ...
	I0315 07:51:31.589367 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:31.679207 3490722 logs.go:123] Gathering logs for kubelet ...
	I0315 07:51:31.679237 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0315 07:51:31.765777 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151553     663 reflector.go:138] object-"kube-system"/"metrics-server-token-h2gnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-h2gnq" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766009 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151674     663 reflector.go:138] object-"default"/"default-token-82cb9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-82cb9" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766219 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151731     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766444 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151795     663 reflector.go:138] object-"kube-system"/"coredns-token-vw5pl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vw5pl" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766666 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151873     663 reflector.go:138] object-"kube-system"/"kindnet-token-jrrqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jrrqr" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766881 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151952     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-vqzhx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vqzhx" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.767126 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152014     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.767352 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152079     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hnqzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hnqzn" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.777873 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:58 old-k8s-version-591842 kubelet[663]: E0315 07:45:58.854597     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.778081 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:59 old-k8s-version-591842 kubelet[663]: E0315 07:45:59.802434     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.784967 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:14 old-k8s-version-591842 kubelet[663]: E0315 07:46:14.653468     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.787050 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:20 old-k8s-version-591842 kubelet[663]: E0315 07:46:20.888372     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.788741 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:21 old-k8s-version-591842 kubelet[663]: E0315 07:46:21.893270     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.789087 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:22 old-k8s-version-591842 kubelet[663]: E0315 07:46:22.992818     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.789271 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:25 old-k8s-version-591842 kubelet[663]: E0315 07:46:25.647507     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.790036 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:30 old-k8s-version-591842 kubelet[663]: E0315 07:46:30.928822     663 pod_workers.go:191] Error syncing pod 1b02bdb3-5934-4002-980c-769d1de68357 ("storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"
	W0315 07:51:31.790614 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:35 old-k8s-version-591842 kubelet[663]: E0315 07:46:35.946453     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.793030 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:37 old-k8s-version-591842 kubelet[663]: E0315 07:46:37.659626     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.793817 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:42 old-k8s-version-591842 kubelet[663]: E0315 07:46:42.993141     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.794004 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:48 old-k8s-version-591842 kubelet[663]: E0315 07:46:48.651057     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.794583 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:58 old-k8s-version-591842 kubelet[663]: E0315 07:46:58.015895     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.794910 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:02 old-k8s-version-591842 kubelet[663]: E0315 07:47:02.992859     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.797506 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:03 old-k8s-version-591842 kubelet[663]: E0315 07:47:03.649062     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.800132 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:15 old-k8s-version-591842 kubelet[663]: E0315 07:47:15.644773     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.800463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:16 old-k8s-version-591842 kubelet[663]: E0315 07:47:16.645024     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.802857 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:29 old-k8s-version-591842 kubelet[663]: E0315 07:47:29.656112     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.803187 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:30 old-k8s-version-591842 kubelet[663]: E0315 07:47:30.644347     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.803376 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:42 old-k8s-version-591842 kubelet[663]: E0315 07:47:42.645173     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.803959 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:45 old-k8s-version-591842 kubelet[663]: E0315 07:47:45.161682     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804281 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:52 old-k8s-version-591842 kubelet[663]: E0315 07:47:52.992815     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:54 old-k8s-version-591842 kubelet[663]: E0315 07:47:54.645803     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.804784 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.644489     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804966 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.645236     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805298 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.644500     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.805483 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.645259     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805664 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:29 old-k8s-version-591842 kubelet[663]: E0315 07:48:29.644702     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805987 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:33 old-k8s-version-591842 kubelet[663]: E0315 07:48:33.645925     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.806168 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:42 old-k8s-version-591842 kubelet[663]: E0315 07:48:42.644688     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.806493 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:45 old-k8s-version-591842 kubelet[663]: E0315 07:48:45.644849     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.808911 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:57 old-k8s-version-591842 kubelet[663]: E0315 07:48:57.654804     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.809237 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:59 old-k8s-version-591842 kubelet[663]: E0315 07:48:59.644341     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.809453 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:09 old-k8s-version-591842 kubelet[663]: E0315 07:49:09.653348     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.810032 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:14 old-k8s-version-591842 kubelet[663]: E0315 07:49:14.388467     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.810354 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:22 old-k8s-version-591842 kubelet[663]: E0315 07:49:22.993103     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.810536 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:23 old-k8s-version-591842 kubelet[663]: E0315 07:49:23.645937     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.810717 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:35 old-k8s-version-591842 kubelet[663]: E0315 07:49:35.645912     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.811038 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:36 old-k8s-version-591842 kubelet[663]: E0315 07:49:36.644319     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.813162 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:47 old-k8s-version-591842 kubelet[663]: E0315 07:49:47.658879     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.813501 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:50 old-k8s-version-591842 kubelet[663]: E0315 07:49:50.644297     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.813686 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:02 old-k8s-version-591842 kubelet[663]: E0315 07:50:02.644922     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.814013 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:05 old-k8s-version-591842 kubelet[663]: E0315 07:50:05.644906     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.814335 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.645285     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.814518 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.647267     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.814841 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: E0315 07:50:28.644332     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.815023 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:29 old-k8s-version-591842 kubelet[663]: E0315 07:50:29.645326     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816134 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:42 old-k8s-version-591842 kubelet[663]: E0315 07:50:42.644973     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816475 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.816660 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816996 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.817320 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.817501 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.817684 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:19 old-k8s-version-591842 kubelet[663]: E0315 07:51:19.648408     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.818007 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: E0315 07:51:21.644468     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	I0315 07:51:31.818018 3490722 logs.go:123] Gathering logs for etcd [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe] ...
	I0315 07:51:31.818032 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:31.903966 3490722 logs.go:123] Gathering logs for etcd [288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf] ...
	I0315 07:51:31.903997 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:31.986316 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:31.986344 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0315 07:51:31.986390 3490722 out.go:239] X Problems detected in kubelet:
	W0315 07:51:31.986404 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.986414 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.986428 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.986435 3490722 out.go:239]   Mar 15 07:51:19 old-k8s-version-591842 kubelet[663]: E0315 07:51:19.648408     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.986449 3490722 out.go:239]   Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: E0315 07:51:21.644468     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	I0315 07:51:31.986457 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:31.986464 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:41.987133 3490722 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0315 07:51:41.998532 3490722 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0315 07:51:42.008114 3490722 out.go:177] 
	W0315 07:51:42.013913 3490722 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0315 07:51:42.013959 3490722 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0315 07:51:42.013981 3490722 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0315 07:51:42.013988 3490722 out.go:239] * 
	W0315 07:51:42.015597 3490722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:51:42.018503 3490722 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
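Note: the exit status 102 above corresponds to the K8S_UNHEALTHY_CONTROL_PLANE path printed in the stderr block, and minikube itself suggests purging the profile. A minimal recovery sketch, reusing only the profile name and flags already shown in this log (binary path is the test checkout's; adjust for your environment):

	# Purge the stale profile and its volumes, as the log output suggests
	minikube delete --all --purge
	# Retry the same second start that failed here
	out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
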
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-591842
helpers_test.go:235: (dbg) docker inspect old-k8s-version-591842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc",
	        "Created": "2024-03-15T07:42:21.029029564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3490914,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-15T07:45:32.360092308Z",
	            "FinishedAt": "2024-03-15T07:45:31.249186001Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc/hostname",
	        "HostsPath": "/var/lib/docker/containers/18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc/hosts",
	        "LogPath": "/var/lib/docker/containers/18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc/18fa161c9b305c812c8cb0139978d2076a2cbfd7c3b54015de2a346b49e84efc-json.log",
	        "Name": "/old-k8s-version-591842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-591842:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-591842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5cf4b5005a795b67c812375de157beba0f565cab11117ac227fe78989c7c1632-init/diff:/var/lib/docker/overlay2/81bfb75b66991fc99a81a39de84c7e82ece5b807050cd14d22a1050d39339cc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cf4b5005a795b67c812375de157beba0f565cab11117ac227fe78989c7c1632/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cf4b5005a795b67c812375de157beba0f565cab11117ac227fe78989c7c1632/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cf4b5005a795b67c812375de157beba0f565cab11117ac227fe78989c7c1632/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-591842",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-591842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-591842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-591842",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-591842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a9f8dcd7360caf540d14a17c7fe00e3cd7f8668d7fcccc210f81fe4aa09765d",
	            "SandboxKey": "/var/run/docker/netns/6a9f8dcd7360",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36972"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-591842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18fa161c9b30",
	                        "old-k8s-version-591842"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "3a5282fbcdd526425fdd031a05b57d603c6c505a01ddafd1bc89f63881ca5da8",
	                    "EndpointID": "508a6c0bafdc514c09d7a945d113f5288672f9980150a624c872249c913dfab3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-591842",
	                        "18fa161c9b30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
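For quicker triage than reading the full JSON above, the same inspection can be narrowed with docker inspect's --format flag (a sketch; the Go-template paths follow the fields visible in the output above):

	# Container state and restart count only
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-591842
	# IP address on the profile network (the map key matches the network name shown above)
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-591842").IPAddress}}' old-k8s-version-591842
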
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591842 -n old-k8s-version-591842
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-591842 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-591842 logs -n 25: (2.069631704s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-764294                              | cert-expiration-764294       | jenkins | v1.32.0 | 15 Mar 24 07:41 UTC | 15 Mar 24 07:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-770281                               | force-systemd-env-770281     | jenkins | v1.32.0 | 15 Mar 24 07:41 UTC | 15 Mar 24 07:41 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-770281                            | force-systemd-env-770281     | jenkins | v1.32.0 | 15 Mar 24 07:41 UTC | 15 Mar 24 07:41 UTC |
	| start   | -p cert-options-342304                                 | cert-options-342304          | jenkins | v1.32.0 | 15 Mar 24 07:41 UTC | 15 Mar 24 07:42 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-342304 ssh                                | cert-options-342304          | jenkins | v1.32.0 | 15 Mar 24 07:42 UTC | 15 Mar 24 07:42 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-342304 -- sudo                         | cert-options-342304          | jenkins | v1.32.0 | 15 Mar 24 07:42 UTC | 15 Mar 24 07:42 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-342304                                 | cert-options-342304          | jenkins | v1.32.0 | 15 Mar 24 07:42 UTC | 15 Mar 24 07:42 UTC |
	| start   | -p old-k8s-version-591842                              | old-k8s-version-591842       | jenkins | v1.32.0 | 15 Mar 24 07:42 UTC | 15 Mar 24 07:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-764294                              | cert-expiration-764294       | jenkins | v1.32.0 | 15 Mar 24 07:44 UTC | 15 Mar 24 07:44 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-764294                              | cert-expiration-764294       | jenkins | v1.32.0 | 15 Mar 24 07:44 UTC | 15 Mar 24 07:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:44 UTC | 15 Mar 24 07:45 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-591842        | old-k8s-version-591842       | jenkins | v1.32.0 | 15 Mar 24 07:45 UTC | 15 Mar 24 07:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-591842                              | old-k8s-version-591842       | jenkins | v1.32.0 | 15 Mar 24 07:45 UTC | 15 Mar 24 07:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-591842             | old-k8s-version-591842       | jenkins | v1.32.0 | 15 Mar 24 07:45 UTC | 15 Mar 24 07:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-591842                              | old-k8s-version-591842       | jenkins | v1.32.0 | 15 Mar 24 07:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484299  | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:46 UTC | 15 Mar 24 07:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:46 UTC | 15 Mar 24 07:46 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484299       | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:46 UTC | 15 Mar 24 07:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:46 UTC | 15 Mar 24 07:50 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-484299                           | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC | 15 Mar 24 07:51 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC | 15 Mar 24 07:51 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC | 15 Mar 24 07:51 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC | 15 Mar 24 07:51 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-484299 | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC | 15 Mar 24 07:51 UTC |
	|         | default-k8s-diff-port-484299                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-722347                                  | embed-certs-722347           | jenkins | v1.32.0 | 15 Mar 24 07:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:51:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:51:06.964246 3500589 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:51:06.966359 3500589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:06.966374 3500589 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:06.966381 3500589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:06.966809 3500589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:51:06.967706 3500589 out.go:298] Setting JSON to false
	I0315 07:51:06.969365 3500589 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":59611,"bootTime":1710429456,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:51:06.969467 3500589 start.go:139] virtualization:  
	I0315 07:51:06.972760 3500589 out.go:177] * [embed-certs-722347] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:51:06.975838 3500589 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:51:06.977590 3500589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:51:06.976000 3500589 notify.go:220] Checking for updates...
	I0315 07:51:06.979643 3500589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:51:06.981630 3500589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:51:06.983794 3500589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:51:06.985642 3500589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:51:06.988451 3500589 config.go:182] Loaded profile config "old-k8s-version-591842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0315 07:51:06.988549 3500589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:51:07.018396 3500589 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:51:07.018519 3500589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:51:07.096934 3500589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:51:07.086932252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:51:07.097046 3500589 docker.go:295] overlay module found
	I0315 07:51:07.099243 3500589 out.go:177] * Using the docker driver based on user configuration
	I0315 07:51:07.101230 3500589 start.go:297] selected driver: docker
	I0315 07:51:07.101251 3500589 start.go:901] validating driver "docker" against <nil>
	I0315 07:51:07.101267 3500589 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:51:07.101926 3500589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:51:07.156304 3500589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:51:07.146478273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:51:07.156458 3500589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:51:07.156691 3500589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 07:51:07.158654 3500589 out.go:177] * Using Docker driver with root privileges
	I0315 07:51:07.160820 3500589 cni.go:84] Creating CNI manager for ""
	I0315 07:51:07.160843 3500589 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:51:07.160854 3500589 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 07:51:07.160933 3500589 start.go:340] cluster config:
	{Name:embed-certs-722347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-722347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
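The dump above is the full cluster config serialized with Go's %+v verb. For orientation, a trimmed-down struct mirroring a few of the fields visible in the dump (field names copied from the log; illustrative only, not minikube's actual ClusterConfig type):

package main

import "fmt"

// Illustrative subset of the config fields shown in the log dump.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	EmbedCerts       bool
	Memory           int // MB
	CPUs             int
	DiskSize         int // MB
	Driver           string
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:       "embed-certs-722347",
		EmbedCerts: true,
		Memory:     2200,
		CPUs:       2,
		DiskSize:   20000,
		Driver:     "docker",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.28.4",
			ClusterName:       "embed-certs-722347",
			ContainerRuntime:  "containerd",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	// %+v is the same verb that produced the dump in the log.
	fmt.Printf("%+v\n", cfg)
}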
	I0315 07:51:07.163064 3500589 out.go:177] * Starting "embed-certs-722347" primary control-plane node in "embed-certs-722347" cluster
	I0315 07:51:07.165285 3500589 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:51:07.168261 3500589 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:51:07.170411 3500589 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:51:07.170466 3500589 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0315 07:51:07.170472 3500589 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:51:07.170479 3500589 cache.go:56] Caching tarball of preloaded images
	I0315 07:51:07.170703 3500589 preload.go:173] Found /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0315 07:51:07.170717 3500589 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0315 07:51:07.170815 3500589 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/config.json ...
	I0315 07:51:07.170844 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/config.json: {Name:mk8e63ea88fd916c4c3f98fbf1530b6329625bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:07.188577 3500589 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0315 07:51:07.188600 3500589 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0315 07:51:07.188624 3500589 cache.go:194] Successfully downloaded all kic artifacts
	I0315 07:51:07.188652 3500589 start.go:360] acquireMachinesLock for embed-certs-722347: {Name:mkbbec9ff3261837b604deb6de7373b706f2025b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 07:51:07.189255 3500589 start.go:364] duration metric: took 582.854µs to acquireMachinesLock for "embed-certs-722347"
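The machines lock above carries Delay:500ms and Timeout:10m0s. A generic sketch of that retry-until-timeout file-lock pattern (assumed behavior for illustration; not minikube's lock.go):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire takes an exclusive lock by creating the lock file with O_EXCL,
// retrying every `delay` until `timeout` elapses.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}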
	I0315 07:51:07.189307 3500589 start.go:93] Provisioning new machine with config: &{Name:embed-certs-722347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-722347 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0315 07:51:07.189397 3500589 start.go:125] createHost starting for "" (driver="docker")
	I0315 07:51:07.686339 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:10.178061 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:07.193157 3500589 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0315 07:51:07.193404 3500589 start.go:159] libmachine.API.Create for "embed-certs-722347" (driver="docker")
	I0315 07:51:07.193439 3500589 client.go:168] LocalClient.Create starting
	I0315 07:51:07.193510 3500589 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem
	I0315 07:51:07.193547 3500589 main.go:141] libmachine: Decoding PEM data...
	I0315 07:51:07.193564 3500589 main.go:141] libmachine: Parsing certificate...
	I0315 07:51:07.193637 3500589 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem
	I0315 07:51:07.193661 3500589 main.go:141] libmachine: Decoding PEM data...
	I0315 07:51:07.193678 3500589 main.go:141] libmachine: Parsing certificate...
	I0315 07:51:07.194032 3500589 cli_runner.go:164] Run: docker network inspect embed-certs-722347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0315 07:51:07.208543 3500589 cli_runner.go:211] docker network inspect embed-certs-722347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0315 07:51:07.208630 3500589 network_create.go:281] running [docker network inspect embed-certs-722347] to gather additional debugging logs...
	I0315 07:51:07.208649 3500589 cli_runner.go:164] Run: docker network inspect embed-certs-722347
	W0315 07:51:07.224176 3500589 cli_runner.go:211] docker network inspect embed-certs-722347 returned with exit code 1
	I0315 07:51:07.224208 3500589 network_create.go:284] error running [docker network inspect embed-certs-722347]: docker network inspect embed-certs-722347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-722347 not found
	I0315 07:51:07.224220 3500589 network_create.go:286] output of [docker network inspect embed-certs-722347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-722347 not found
	
	** /stderr **
	I0315 07:51:07.224319 3500589 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 07:51:07.239549 3500589 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2a726f180238 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:7e:c5:12:0f} reservation:<nil>}
	I0315 07:51:07.240045 3500589 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b9c6f1496deb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7b:41:1e:22} reservation:<nil>}
	I0315 07:51:07.240449 3500589 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1634a6e12e83 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:43:0d:36:ef} reservation:<nil>}
	I0315 07:51:07.240784 3500589 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3a5282fbcdd5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:8a:49:e4:6c} reservation:<nil>}
	I0315 07:51:07.241283 3500589 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002624f80}
	I0315 07:51:07.241330 3500589 network_create.go:124] attempt to create docker network embed-certs-722347 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0315 07:51:07.241386 3500589 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-722347 embed-certs-722347
	I0315 07:51:07.316575 3500589 network_create.go:108] docker network embed-certs-722347 192.168.85.0/24 created
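A minimal Go sketch replicating the network-create step with os/exec; the flags are copied verbatim from the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Bridge network for the node container, using the free subnet the
	// scan above selected (192.168.85.0/24).
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.85.0/24",
		"--gateway=192.168.85.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=embed-certs-722347",
		"embed-certs-722347")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}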
	I0315 07:51:07.316607 3500589 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-722347" container
	I0315 07:51:07.316675 3500589 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0315 07:51:07.332547 3500589 cli_runner.go:164] Run: docker volume create embed-certs-722347 --label name.minikube.sigs.k8s.io=embed-certs-722347 --label created_by.minikube.sigs.k8s.io=true
	I0315 07:51:07.348410 3500589 oci.go:103] Successfully created a docker volume embed-certs-722347
	I0315 07:51:07.348490 3500589 cli_runner.go:164] Run: docker run --rm --name embed-certs-722347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-722347 --entrypoint /usr/bin/test -v embed-certs-722347:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0315 07:51:08.025178 3500589 oci.go:107] Successfully prepared a docker volume embed-certs-722347
	I0315 07:51:08.025221 3500589 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:51:08.025241 3500589 kic.go:194] Starting extracting preloaded images to volume ...
	I0315 07:51:08.025331 3500589 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-722347:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
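The extraction step runs a throwaway container with /usr/bin/tar as its entrypoint so the lz4 preload unpacks straight into the machine volume. A sketch of the same invocation (paths and image digest taken from the logged command):

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f"
	// Mount the preload read-only, mount the machine volume at
	// /extractDir, and let tar do the unpacking inside the container.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "embed-certs-722347:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}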
	I0315 07:51:12.677423 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:14.677510 3490722 pod_ready.go:102] pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace has status "Ready":"False"
	I0315 07:51:16.677351 3490722 pod_ready.go:81] duration metric: took 4m0.006438222s for pod "metrics-server-9975d5f86-9j72g" in "kube-system" namespace to be "Ready" ...
	E0315 07:51:16.677377 3490722 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0315 07:51:16.677387 3490722 pod_ready.go:38] duration metric: took 5m20.352426308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 07:51:16.677401 3490722 api_server.go:52] waiting for apiserver process to appear ...
	I0315 07:51:16.677430 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:51:16.677492 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:51:16.741347 3490722 cri.go:89] found id: "3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:16.741365 3490722 cri.go:89] found id: "aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:16.741370 3490722 cri.go:89] found id: ""
	I0315 07:51:16.741377 3490722 logs.go:276] 2 containers: [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5]
	I0315 07:51:16.741429 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.745174 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.748785 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0315 07:51:16.748846 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
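Each cri.go listing shells out to crictl, which prints one container ID per line; an empty line (as with the third "found id" above) means no further matches. A small Go wrapper illustrating the pattern:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs printed by
// `sudo crictl ps -a --quiet --name=<name>`, one per output line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, l := range strings.Split(string(out), "\n") {
		if l = strings.TrimSpace(l); l != "" {
			ids = append(ids, l)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("etcd")
	if err != nil {
		panic(err)
	}
	fmt.Println(ids)
}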
	I0315 07:51:13.071627 3500589 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-722347:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (5.046257196s)
	I0315 07:51:13.071659 3500589 kic.go:203] duration metric: took 5.046414304s to extract preloaded images to volume ...
	W0315 07:51:13.071823 3500589 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0315 07:51:13.071934 3500589 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0315 07:51:13.126832 3500589 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-722347 --name embed-certs-722347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-722347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-722347 --network embed-certs-722347 --ip 192.168.85.2 --volume embed-certs-722347:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0315 07:51:13.492780 3500589 cli_runner.go:164] Run: docker container inspect embed-certs-722347 --format={{.State.Running}}
	I0315 07:51:13.517469 3500589 cli_runner.go:164] Run: docker container inspect embed-certs-722347 --format={{.State.Status}}
	I0315 07:51:13.537707 3500589 cli_runner.go:164] Run: docker exec embed-certs-722347 stat /var/lib/dpkg/alternatives/iptables
	I0315 07:51:13.626528 3500589 oci.go:144] the created container "embed-certs-722347" has a running status.
	I0315 07:51:13.626554 3500589 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa...
	I0315 07:51:14.312982 3500589 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0315 07:51:14.344591 3500589 cli_runner.go:164] Run: docker container inspect embed-certs-722347 --format={{.State.Status}}
	I0315 07:51:14.368961 3500589 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0315 07:51:14.368984 3500589 kic_runner.go:114] Args: [docker exec --privileged embed-certs-722347 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0315 07:51:14.444500 3500589 cli_runner.go:164] Run: docker container inspect embed-certs-722347 --format={{.State.Status}}
	I0315 07:51:14.469400 3500589 machine.go:94] provisionDockerMachine start ...
	I0315 07:51:14.469497 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:14.499265 3500589 main.go:141] libmachine: Using SSH client type: native
	I0315 07:51:14.499547 3500589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36985 <nil> <nil>}
	I0315 07:51:14.499558 3500589 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 07:51:14.651562 3500589 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-722347
	
	I0315 07:51:14.651586 3500589 ubuntu.go:169] provisioning hostname "embed-certs-722347"
	I0315 07:51:14.651653 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:14.683504 3500589 main.go:141] libmachine: Using SSH client type: native
	I0315 07:51:14.683752 3500589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36985 <nil> <nil>}
	I0315 07:51:14.683770 3500589 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-722347 && echo "embed-certs-722347" | sudo tee /etc/hostname
	I0315 07:51:14.845063 3500589 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-722347
	
	I0315 07:51:14.845199 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:14.864555 3500589 main.go:141] libmachine: Using SSH client type: native
	I0315 07:51:14.864788 3500589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 36985 <nil> <nil>}
	I0315 07:51:14.864805 3500589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-722347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-722347/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-722347' | sudo tee -a /etc/hosts; 
				fi
			fi
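The shell script above makes the /etc/hosts update idempotent: it adds a 127.0.1.1 entry for the hostname only when none exists, rewriting a stale 127.0.1.1 line in place rather than appending a duplicate. The same logic as a Go sketch (illustrative only; prints the patched file instead of writing it):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic of the script above.
func ensureHostsEntry(contents, ip, host string) string {
	lines := strings.Split(contents, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), host) {
			return contents // an entry for the hostname already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, ip+" ") || strings.HasPrefix(l, ip+"\t") {
			lines[i] = ip + " " + host // replace the stale entry
			return strings.Join(lines, "\n")
		}
	}
	return contents + "\n" + ip + " " + host + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "127.0.1.1", "embed-certs-722347"))
}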
	I0315 07:51:15.034245 3500589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 07:51:15.034315 3500589 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18213-3295134/.minikube CaCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18213-3295134/.minikube}
	I0315 07:51:15.034354 3500589 ubuntu.go:177] setting up certificates
	I0315 07:51:15.034377 3500589 provision.go:84] configureAuth start
	I0315 07:51:15.034460 3500589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-722347
	I0315 07:51:15.059134 3500589 provision.go:143] copyHostCerts
	I0315 07:51:15.059215 3500589 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem, removing ...
	I0315 07:51:15.059233 3500589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem
	I0315 07:51:15.059318 3500589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.pem (1078 bytes)
	I0315 07:51:15.059438 3500589 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem, removing ...
	I0315 07:51:15.059449 3500589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem
	I0315 07:51:15.059485 3500589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/cert.pem (1123 bytes)
	I0315 07:51:15.059544 3500589 exec_runner.go:144] found /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem, removing ...
	I0315 07:51:15.059556 3500589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem
	I0315 07:51:15.059583 3500589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18213-3295134/.minikube/key.pem (1679 bytes)
	I0315 07:51:15.059634 3500589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem org=jenkins.embed-certs-722347 san=[127.0.0.1 192.168.85.2 embed-certs-722347 localhost minikube]
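The server cert above is signed by the minikube CA and carries the SANs listed in the log. A self-contained crypto/x509 sketch of producing a certificate with those SANs (self-signed here for brevity, where the real flow signs with ca.pem/ca-key.pem; not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs and org are taken from the log line above; 26280h matches the
	// CertExpiration value in the config dump.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-722347"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-722347", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}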
	I0315 07:51:15.549067 3500589 provision.go:177] copyRemoteCerts
	I0315 07:51:15.549135 3500589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 07:51:15.549199 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:15.565719 3500589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36985 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa Username:docker}
	I0315 07:51:15.668354 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0315 07:51:15.699272 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0315 07:51:15.723844 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 07:51:15.748402 3500589 provision.go:87] duration metric: took 713.997854ms to configureAuth
	I0315 07:51:15.748427 3500589 ubuntu.go:193] setting minikube options for container-runtime
	I0315 07:51:15.748604 3500589 config.go:182] Loaded profile config "embed-certs-722347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:51:15.748611 3500589 machine.go:97] duration metric: took 1.279194566s to provisionDockerMachine
	I0315 07:51:15.748618 3500589 client.go:171] duration metric: took 8.555168333s to LocalClient.Create
	I0315 07:51:15.748631 3500589 start.go:167] duration metric: took 8.555227343s to libmachine.API.Create "embed-certs-722347"
	I0315 07:51:15.748638 3500589 start.go:293] postStartSetup for "embed-certs-722347" (driver="docker")
	I0315 07:51:15.748647 3500589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 07:51:15.748701 3500589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 07:51:15.748750 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:15.765596 3500589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36985 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa Username:docker}
	I0315 07:51:15.872379 3500589 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 07:51:15.875630 3500589 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 07:51:15.875716 3500589 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 07:51:15.875739 3500589 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 07:51:15.875747 3500589 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0315 07:51:15.875804 3500589 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/addons for local assets ...
	I0315 07:51:15.875864 3500589 filesync.go:126] Scanning /home/jenkins/minikube-integration/18213-3295134/.minikube/files for local assets ...
	I0315 07:51:15.875952 3500589 filesync.go:149] local asset: /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem -> 33005502.pem in /etc/ssl/certs
	I0315 07:51:15.876063 3500589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 07:51:15.884721 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem --> /etc/ssl/certs/33005502.pem (1708 bytes)
	I0315 07:51:15.909572 3500589 start.go:296] duration metric: took 160.919888ms for postStartSetup
	I0315 07:51:15.909996 3500589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-722347
	I0315 07:51:15.932628 3500589 profile.go:142] Saving config to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/config.json ...
	I0315 07:51:15.932913 3500589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:51:15.933005 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:15.948964 3500589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36985 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa Username:docker}
	I0315 07:51:16.044433 3500589 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 07:51:16.049248 3500589 start.go:128] duration metric: took 8.859834932s to createHost
	I0315 07:51:16.049271 3500589 start.go:83] releasing machines lock for "embed-certs-722347", held for 8.860000172s
	I0315 07:51:16.049370 3500589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-722347
	I0315 07:51:16.067927 3500589 ssh_runner.go:195] Run: cat /version.json
	I0315 07:51:16.067987 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:16.068048 3500589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 07:51:16.068116 3500589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-722347
	I0315 07:51:16.092329 3500589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36985 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa Username:docker}
	I0315 07:51:16.104616 3500589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36985 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/embed-certs-722347/id_rsa Username:docker}
	I0315 07:51:16.195662 3500589 ssh_runner.go:195] Run: systemctl --version
	I0315 07:51:16.324627 3500589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 07:51:16.329572 3500589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0315 07:51:16.357026 3500589 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0315 07:51:16.357117 3500589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 07:51:16.388919 3500589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0315 07:51:16.388942 3500589 start.go:494] detecting cgroup driver to use...
	I0315 07:51:16.388976 3500589 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 07:51:16.389025 3500589 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0315 07:51:16.401311 3500589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 07:51:16.412839 3500589 docker.go:217] disabling cri-docker service (if available) ...
	I0315 07:51:16.412904 3500589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 07:51:16.427040 3500589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 07:51:16.442013 3500589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 07:51:16.526842 3500589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 07:51:16.623457 3500589 docker.go:233] disabling docker service ...
	I0315 07:51:16.623576 3500589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 07:51:16.646017 3500589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 07:51:16.658104 3500589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 07:51:16.781907 3500589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 07:51:16.882692 3500589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 07:51:16.897241 3500589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 07:51:16.919537 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0315 07:51:16.932044 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 07:51:16.945965 3500589 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 07:51:16.946099 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 07:51:16.959418 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:51:16.973939 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 07:51:16.992629 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 07:51:17.004184 3500589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 07:51:17.024960 3500589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 07:51:17.035748 3500589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 07:51:17.051823 3500589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 07:51:17.065911 3500589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:51:17.202561 3500589 ssh_runner.go:195] Run: sudo systemctl restart containerd
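The sed edits above align containerd with the detected "cgroupfs" driver, chiefly by forcing SystemdCgroup = false before the restart. A Go sketch of that one edit (same intent as the logged sed; path from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Force runc's SystemdCgroup option off so containerd matches the
	// "cgroupfs" cgroup driver detected on the host.
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart containerd`
	// follow in the log to apply the change.
}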
	I0315 07:51:17.400846 3500589 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0315 07:51:17.400957 3500589 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0315 07:51:17.409572 3500589 start.go:562] Will wait 60s for crictl version
	I0315 07:51:17.409712 3500589 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.413440 3500589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 07:51:17.473401 3500589 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0315 07:51:17.473492 3500589 ssh_runner.go:195] Run: containerd --version
	I0315 07:51:17.496872 3500589 ssh_runner.go:195] Run: containerd --version
	I0315 07:51:17.527242 3500589 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0315 07:51:17.529470 3500589 cli_runner.go:164] Run: docker network inspect embed-certs-722347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 07:51:17.547823 3500589 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0315 07:51:17.551847 3500589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:51:17.564343 3500589 kubeadm.go:877] updating cluster {Name:embed-certs-722347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-722347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 07:51:17.564458 3500589 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:51:17.564516 3500589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:51:17.613791 3500589 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:51:17.613812 3500589 containerd.go:519] Images already preloaded, skipping extraction
	I0315 07:51:17.613881 3500589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 07:51:17.675288 3500589 containerd.go:612] all images are preloaded for containerd runtime.
	I0315 07:51:17.675316 3500589 cache_images.go:84] Images are preloaded, skipping loading
	I0315 07:51:17.675338 3500589 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.28.4 containerd true true} ...
	I0315 07:51:17.675437 3500589 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-722347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-722347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 07:51:17.675500 3500589 ssh_runner.go:195] Run: sudo crictl info
	I0315 07:51:17.742197 3500589 cni.go:84] Creating CNI manager for ""
	I0315 07:51:17.742217 3500589 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:51:17.742226 3500589 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 07:51:17.742251 3500589 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-722347 NodeName:embed-certs-722347 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 07:51:17.742391 3500589 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-722347"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 07:51:17.742454 3500589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 07:51:17.752517 3500589 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 07:51:17.752583 3500589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 07:51:17.763155 3500589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0315 07:51:17.788503 3500589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 07:51:17.820532 3500589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
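The generated multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as kubeadm.yaml.new and later copied over /var/tmp/minikube/kubeadm.yaml. A hedged sketch of feeding such a file to kubeadm; the exact invocation minikube builds is not shown in this excerpt, and skipping SystemVerification mirrors the docker-driver note later in the log:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical invocation for illustration only.
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=SystemVerification")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}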
	I0315 07:51:17.843460 3500589 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0315 07:51:17.847660 3500589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 07:51:17.859015 3500589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 07:51:17.991630 3500589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 07:51:18.010954 3500589 certs.go:68] Setting up /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347 for IP: 192.168.85.2
	I0315 07:51:18.010976 3500589 certs.go:194] generating shared ca certs ...
	I0315 07:51:18.010994 3500589 certs.go:226] acquiring lock for ca certs: {Name:mk9abb58e338d3f021292a49b0c7ea22df42932a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:18.011226 3500589 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key
	I0315 07:51:18.011268 3500589 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key
	I0315 07:51:18.011276 3500589 certs.go:256] generating profile certs ...
	I0315 07:51:18.011336 3500589 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.key
	I0315 07:51:18.011348 3500589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.crt with IP's: []
	I0315 07:51:18.248120 3500589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.crt ...
	I0315 07:51:18.248188 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.crt: {Name:mk7afe9d378745aa9b6d80e417cd8b37013d316a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:18.249553 3500589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.key ...
	I0315 07:51:18.249620 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/client.key: {Name:mkd9d15459a043e8e101b1426ba0a8d406e16880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:18.249818 3500589 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key.6fbee632
	I0315 07:51:18.249861 3500589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt.6fbee632 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0315 07:51:19.483994 3500589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt.6fbee632 ...
	I0315 07:51:19.484077 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt.6fbee632: {Name:mkff8d19ec8ba146511eb0d74344bce365a04de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:19.484870 3500589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key.6fbee632 ...
	I0315 07:51:19.484927 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key.6fbee632: {Name:mk75d521a12e8526d566ced7b5778c46290d47f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:19.485441 3500589 certs.go:381] copying /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt.6fbee632 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt
	I0315 07:51:19.485590 3500589 certs.go:385] copying /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key.6fbee632 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key
	I0315 07:51:19.485699 3500589 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.key
	I0315 07:51:19.485743 3500589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.crt with IP's: []
	I0315 07:51:19.838076 3500589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.crt ...
	I0315 07:51:19.838111 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.crt: {Name:mk16b5ee09b205fa7f0a43fa373a056781ff9234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:19.838809 3500589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.key ...
	I0315 07:51:19.838835 3500589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.key: {Name:mk3f05faac83349e1c11e30b10ef096b2f341dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 07:51:19.840144 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550.pem (1338 bytes)
	W0315 07:51:19.840199 3500589 certs.go:480] ignoring /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550_empty.pem, impossibly tiny 0 bytes
	I0315 07:51:19.840214 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 07:51:19.840240 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/ca.pem (1078 bytes)
	I0315 07:51:19.840267 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/cert.pem (1123 bytes)
	I0315 07:51:19.840303 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/key.pem (1679 bytes)
	I0315 07:51:19.840350 3500589 certs.go:484] found cert: /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem (1708 bytes)
	I0315 07:51:19.840977 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 07:51:19.867853 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0315 07:51:19.895744 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 07:51:19.927264 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 07:51:19.957760 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0315 07:51:19.986080 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 07:51:20.018132 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 07:51:20.048497 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/embed-certs-722347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 07:51:20.076603 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 07:51:20.105826 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/certs/3300550.pem --> /usr/share/ca-certificates/3300550.pem (1338 bytes)
	I0315 07:51:20.137217 3500589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/ssl/certs/33005502.pem --> /usr/share/ca-certificates/33005502.pem (1708 bytes)
	I0315 07:51:20.162433 3500589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 07:51:20.183501 3500589 ssh_runner.go:195] Run: openssl version
	I0315 07:51:20.189913 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 07:51:20.200753 3500589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:51:20.204637 3500589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 07:01 /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:51:20.204747 3500589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 07:51:20.212221 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 07:51:20.222138 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3300550.pem && ln -fs /usr/share/ca-certificates/3300550.pem /etc/ssl/certs/3300550.pem"
	I0315 07:51:20.232102 3500589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3300550.pem
	I0315 07:51:20.235712 3500589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 07:07 /usr/share/ca-certificates/3300550.pem
	I0315 07:51:20.235786 3500589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3300550.pem
	I0315 07:51:20.242713 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3300550.pem /etc/ssl/certs/51391683.0"
	I0315 07:51:20.261609 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33005502.pem && ln -fs /usr/share/ca-certificates/33005502.pem /etc/ssl/certs/33005502.pem"
	I0315 07:51:20.270935 3500589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33005502.pem
	I0315 07:51:20.274402 3500589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 07:07 /usr/share/ca-certificates/33005502.pem
	I0315 07:51:20.274466 3500589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33005502.pem
	I0315 07:51:20.281512 3500589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33005502.pem /etc/ssl/certs/3ec20f2e.0"
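Each `test -L ... || ln -fs` step above links a CA under its OpenSSL subject hash (e.g. b5213941.0) so OpenSSL-based clients can resolve it from /etc/ssl/certs. A Go wrapper around the same openssl x509 -hash command:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Path from the log; the resulting link name matches entries like
	// /etc/ssl/certs/b5213941.0 seen above.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // error ignored: the link may not exist yet
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}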
	I0315 07:51:20.291207 3500589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 07:51:20.294451 3500589 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 07:51:20.294553 3500589 kubeadm.go:391] StartCluster: {Name:embed-certs-722347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-722347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:51:20.294643 3500589 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0315 07:51:20.294706 3500589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 07:51:20.344310 3500589 cri.go:89] found id: ""
	I0315 07:51:20.344414 3500589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 07:51:20.353344 3500589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 07:51:20.362464 3500589 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0315 07:51:20.362532 3500589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 07:51:20.372499 3500589 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 07:51:20.372519 3500589 kubeadm.go:156] found existing configuration files:
	
	I0315 07:51:20.372582 3500589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 07:51:20.382458 3500589 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 07:51:20.382530 3500589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 07:51:20.391163 3500589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 07:51:20.401563 3500589 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 07:51:20.401656 3500589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 07:51:20.409967 3500589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 07:51:20.419026 3500589 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 07:51:20.419116 3500589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 07:51:20.427695 3500589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 07:51:20.436853 3500589 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 07:51:20.436919 3500589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
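
The four passes above follow one stale-config pattern: grep each kubeconfig for the expected control-plane endpoint, and delete the file whenever the check exits non-zero (here every grep exits with status 2 because the files do not exist yet). The sequence collapses to a loop like this sketch:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # exit 2 (file missing) and exit 1 (endpoint absent) both trigger removal
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	done
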
	I0315 07:51:20.445560 3500589 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0315 07:51:20.499455 3500589 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 07:51:20.499572 3500589 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 07:51:20.559228 3500589 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0315 07:51:20.559335 3500589 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0315 07:51:20.559388 3500589 kubeadm.go:309] OS: Linux
	I0315 07:51:20.559463 3500589 kubeadm.go:309] CGROUPS_CPU: enabled
	I0315 07:51:20.559533 3500589 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0315 07:51:20.559598 3500589 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0315 07:51:20.559656 3500589 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0315 07:51:20.559774 3500589 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0315 07:51:20.559882 3500589 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0315 07:51:20.559962 3500589 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0315 07:51:20.560050 3500589 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0315 07:51:20.560145 3500589 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0315 07:51:20.635853 3500589 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 07:51:20.636023 3500589 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 07:51:20.636163 3500589 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 07:51:20.873186 3500589 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
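
As the preflight output notes, the image pull can be performed ahead of time. A hedged example, assuming the same versioned kubeadm binary and generated config used by the init command above:

	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	    kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
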
	I0315 07:51:16.809837 3490722 cri.go:89] found id: "3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:16.809901 3490722 cri.go:89] found id: "288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:16.809917 3490722 cri.go:89] found id: ""
	I0315 07:51:16.809925 3490722 logs.go:276] 2 containers: [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf]
	I0315 07:51:16.809988 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.821230 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.832434 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0315 07:51:16.832513 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:51:16.893799 3490722 cri.go:89] found id: "9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:16.893823 3490722 cri.go:89] found id: "7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:16.893828 3490722 cri.go:89] found id: ""
	I0315 07:51:16.893836 3490722 logs.go:276] 2 containers: [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc]
	I0315 07:51:16.893889 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.898028 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.902368 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:51:16.902437 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:51:16.958613 3490722 cri.go:89] found id: "19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:16.958636 3490722 cri.go:89] found id: "eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:16.958642 3490722 cri.go:89] found id: ""
	I0315 07:51:16.958649 3490722 logs.go:276] 2 containers: [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d]
	I0315 07:51:16.958703 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.962711 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:16.966605 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:51:16.966681 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:51:17.016946 3490722 cri.go:89] found id: "24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:17.016970 3490722 cri.go:89] found id: "1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:17.016976 3490722 cri.go:89] found id: ""
	I0315 07:51:17.016985 3490722 logs.go:276] 2 containers: [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439]
	I0315 07:51:17.017041 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.020761 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.027026 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:51:17.027194 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:51:17.081451 3490722 cri.go:89] found id: "39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:17.081475 3490722 cri.go:89] found id: "6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:17.081480 3490722 cri.go:89] found id: ""
	I0315 07:51:17.081487 3490722 logs.go:276] 2 containers: [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d]
	I0315 07:51:17.081544 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.085402 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.088913 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0315 07:51:17.088999 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:51:17.152035 3490722 cri.go:89] found id: "d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:17.152107 3490722 cri.go:89] found id: "7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:17.152125 3490722 cri.go:89] found id: ""
	I0315 07:51:17.152146 3490722 logs.go:276] 2 containers: [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e]
	I0315 07:51:17.152229 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.156132 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.159619 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:51:17.159727 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:51:17.208743 3490722 cri.go:89] found id: "141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:17.208812 3490722 cri.go:89] found id: ""
	I0315 07:51:17.208834 3490722 logs.go:276] 1 containers: [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c]
	I0315 07:51:17.208916 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.212643 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:51:17.212756 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:51:17.276269 3490722 cri.go:89] found id: "ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:17.276352 3490722 cri.go:89] found id: "4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:17.276372 3490722 cri.go:89] found id: ""
	I0315 07:51:17.276394 3490722 logs.go:276] 2 containers: [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2]
	I0315 07:51:17.276485 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:17.281157 3490722 ssh_runner.go:195] Run: which crictl
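
From here the post-mortem collector (process 3490722, interleaved with the kubeadm run above) repeats one discovery pattern per component: list matching container IDs with crictl, then tail each container's logs, as the "Gathering logs" lines below show. One iteration condenses to this sketch, using a component name from the log:

	# list all containers (any state) whose name matches the component
	ids=$(sudo crictl ps -a --quiet --name=kube-scheduler)
	for id in $ids; do
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	done
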
	I0315 07:51:17.285880 3490722 logs.go:123] Gathering logs for kube-apiserver [aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5] ...
	I0315 07:51:17.285975 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:17.383966 3490722 logs.go:123] Gathering logs for etcd [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe] ...
	I0315 07:51:17.383996 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:17.452945 3490722 logs.go:123] Gathering logs for etcd [288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf] ...
	I0315 07:51:17.453085 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:17.518825 3490722 logs.go:123] Gathering logs for kube-proxy [1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439] ...
	I0315 07:51:17.518972 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:17.582975 3490722 logs.go:123] Gathering logs for kube-controller-manager [6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d] ...
	I0315 07:51:17.582999 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:17.721400 3490722 logs.go:123] Gathering logs for kindnet [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842] ...
	I0315 07:51:17.721438 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:17.782289 3490722 logs.go:123] Gathering logs for kubelet ...
	I0315 07:51:17.782317 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0315 07:51:17.844998 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151553     663 reflector.go:138] object-"kube-system"/"metrics-server-token-h2gnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-h2gnq" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845272 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151674     663 reflector.go:138] object-"default"/"default-token-82cb9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-82cb9" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845526 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151731     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845762 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151795     663 reflector.go:138] object-"kube-system"/"coredns-token-vw5pl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vw5pl" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.845999 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151873     663 reflector.go:138] object-"kube-system"/"kindnet-token-jrrqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jrrqr" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846238 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151952     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-vqzhx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vqzhx" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846467 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152014     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.846715 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152079     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hnqzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hnqzn" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:17.857570 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:58 old-k8s-version-591842 kubelet[663]: E0315 07:45:58.854597     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.857857 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:59 old-k8s-version-591842 kubelet[663]: E0315 07:45:59.802434     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.861219 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:14 old-k8s-version-591842 kubelet[663]: E0315 07:46:14.653468     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.863383 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:20 old-k8s-version-591842 kubelet[663]: E0315 07:46:20.888372     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.863745 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:21 old-k8s-version-591842 kubelet[663]: E0315 07:46:21.893270     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.864106 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:22 old-k8s-version-591842 kubelet[663]: E0315 07:46:22.992818     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.864314 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:25 old-k8s-version-591842 kubelet[663]: E0315 07:46:25.647507     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.865123 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:30 old-k8s-version-591842 kubelet[663]: E0315 07:46:30.928822     663 pod_workers.go:191] Error syncing pod 1b02bdb3-5934-4002-980c-769d1de68357 ("storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"
	W0315 07:51:17.865704 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:35 old-k8s-version-591842 kubelet[663]: E0315 07:46:35.946453     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.868835 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:37 old-k8s-version-591842 kubelet[663]: E0315 07:46:37.659626     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.869661 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:42 old-k8s-version-591842 kubelet[663]: E0315 07:46:42.993141     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.869869 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:48 old-k8s-version-591842 kubelet[663]: E0315 07:46:48.651057     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.870811 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:58 old-k8s-version-591842 kubelet[663]: E0315 07:46:58.015895     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.871197 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:02 old-k8s-version-591842 kubelet[663]: E0315 07:47:02.992859     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.871475 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:03 old-k8s-version-591842 kubelet[663]: E0315 07:47:03.649062     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.871720 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:15 old-k8s-version-591842 kubelet[663]: E0315 07:47:15.644773     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.872103 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:16 old-k8s-version-591842 kubelet[663]: E0315 07:47:16.645024     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.874664 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:29 old-k8s-version-591842 kubelet[663]: E0315 07:47:29.656112     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.875037 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:30 old-k8s-version-591842 kubelet[663]: E0315 07:47:30.644347     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.875270 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:42 old-k8s-version-591842 kubelet[663]: E0315 07:47:42.645173     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.875894 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:45 old-k8s-version-591842 kubelet[663]: E0315 07:47:45.161682     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.876256 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:52 old-k8s-version-591842 kubelet[663]: E0315 07:47:52.992815     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.876463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:54 old-k8s-version-591842 kubelet[663]: E0315 07:47:54.645803     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.876813 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.644489     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.877020 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.645236     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.877369 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.644500     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.877577 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.645259     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.877786 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:29 old-k8s-version-591842 kubelet[663]: E0315 07:48:29.644702     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.878133 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:33 old-k8s-version-591842 kubelet[663]: E0315 07:48:33.645925     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.878366 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:42 old-k8s-version-591842 kubelet[663]: E0315 07:48:42.644688     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.878717 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:45 old-k8s-version-591842 kubelet[663]: E0315 07:48:45.644849     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.881377 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:57 old-k8s-version-591842 kubelet[663]: E0315 07:48:57.654804     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:17.881741 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:59 old-k8s-version-591842 kubelet[663]: E0315 07:48:59.644341     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.881947 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:09 old-k8s-version-591842 kubelet[663]: E0315 07:49:09.653348     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.882559 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:14 old-k8s-version-591842 kubelet[663]: E0315 07:49:14.388467     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.882911 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:22 old-k8s-version-591842 kubelet[663]: E0315 07:49:22.993103     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.883162 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:23 old-k8s-version-591842 kubelet[663]: E0315 07:49:23.645937     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.883370 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:35 old-k8s-version-591842 kubelet[663]: E0315 07:49:35.645912     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.883715 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:36 old-k8s-version-591842 kubelet[663]: E0315 07:49:36.644319     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.883929 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:47 old-k8s-version-591842 kubelet[663]: E0315 07:49:47.658879     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.884276 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:50 old-k8s-version-591842 kubelet[663]: E0315 07:49:50.644297     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.884485 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:02 old-k8s-version-591842 kubelet[663]: E0315 07:50:02.644922     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.884840 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:05 old-k8s-version-591842 kubelet[663]: E0315 07:50:05.644906     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.885212 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.645285     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.885484 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.647267     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.889275 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: E0315 07:50:28.644332     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.889531 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:29 old-k8s-version-591842 kubelet[663]: E0315 07:50:29.645326     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.889739 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:42 old-k8s-version-591842 kubelet[663]: E0315 07:50:42.644973     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.890100 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.890309 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:17.890655 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.891006 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:17.891245 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
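
Every "Found kubelet problem" line above comes from the single tail of the kubelet journal at the start of this block; minikube then pattern-matches the entries in Go. An approximate reproduction from inside the node; the grep filter is an assumption, not minikube's actual matcher:

	sudo journalctl -u kubelet -n 400 \
	    | grep -E ' E[0-9]{4} '    # hypothetical filter: keep error-level klog lines
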
	I0315 07:51:17.891273 3490722 logs.go:123] Gathering logs for dmesg ...
	I0315 07:51:17.891305 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:51:17.929063 3490722 logs.go:123] Gathering logs for kindnet [7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e] ...
	I0315 07:51:17.929137 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:17.993303 3490722 logs.go:123] Gathering logs for kube-scheduler [eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d] ...
	I0315 07:51:17.993327 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:18.145483 3490722 logs.go:123] Gathering logs for container status ...
	I0315 07:51:18.145518 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0315 07:51:18.232250 3490722 logs.go:123] Gathering logs for kube-apiserver [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465] ...
	I0315 07:51:18.232279 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:18.307131 3490722 logs.go:123] Gathering logs for coredns [7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc] ...
	I0315 07:51:18.307162 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:18.372534 3490722 logs.go:123] Gathering logs for kubernetes-dashboard [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c] ...
	I0315 07:51:18.372563 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:18.423802 3490722 logs.go:123] Gathering logs for coredns [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e] ...
	I0315 07:51:18.423835 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:18.484538 3490722 logs.go:123] Gathering logs for kube-controller-manager [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2] ...
	I0315 07:51:18.484567 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:18.734982 3490722 logs.go:123] Gathering logs for kube-proxy [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3] ...
	I0315 07:51:18.735022 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:18.989000 3490722 logs.go:123] Gathering logs for storage-provisioner [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803] ...
	I0315 07:51:18.989036 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:19.097844 3490722 logs.go:123] Gathering logs for storage-provisioner [4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2] ...
	I0315 07:51:19.097876 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:19.152808 3490722 logs.go:123] Gathering logs for containerd ...
	I0315 07:51:19.152845 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0315 07:51:19.226776 3490722 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:51:19.226813 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
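
The "describe nodes" output is gathered by running the version-matched kubectl inside the node (v1.20.0 for this old-k8s-version profile) against the in-VM kubeconfig. The same information should be reachable from the host through minikube's kubectl passthrough; a hedged example using the profile name from this log:

	out/minikube-linux-arm64 -p old-k8s-version-591842 kubectl -- describe nodes
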
	I0315 07:51:19.465101 3490722 logs.go:123] Gathering logs for kube-scheduler [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423] ...
	I0315 07:51:19.465178 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:19.520188 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:19.520213 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0315 07:51:19.520268 3490722 out.go:239] X Problems detected in kubelet:
	W0315 07:51:19.520277 3490722 out.go:239]   Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520285 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:19.520293 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520302 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:19.520311 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0315 07:51:19.520322 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:19.520329 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:20.877620 3500589 out.go:204]   - Generating certificates and keys ...
	I0315 07:51:20.877730 3500589 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 07:51:20.877819 3500589 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 07:51:21.260611 3500589 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 07:51:21.550266 3500589 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 07:51:22.311695 3500589 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 07:51:22.560765 3500589 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 07:51:23.248609 3500589 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 07:51:23.248946 3500589 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-722347 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 07:51:24.037075 3500589 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 07:51:24.037433 3500589 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-722347 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 07:51:25.143335 3500589 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 07:51:26.413409 3500589 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 07:51:27.100054 3500589 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 07:51:27.100355 3500589 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 07:51:27.268640 3500589 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 07:51:27.893237 3500589 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 07:51:28.267254 3500589 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 07:51:28.997024 3500589 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 07:51:28.997796 3500589 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 07:51:29.000556 3500589 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
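The `[certs]`, `[kubeconfig]`, `[etcd]`, and `[control-plane]` prefixes above are kubeadm's phased init output. As a hedged aside (not minikube's code), the same phases can be invoked standalone via `kubeadm init phase ...`; a minimal Go sketch that shells out the way the surrounding ssh_runner lines do, with paths as illustrative assumptions:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the cert and kubeconfig phases individually, mirroring the
		// "[certs]" / "[kubeconfig]" kubeadm output captured in the log.
		// The directories are kubeadm defaults, not taken from this run.
		phases := [][]string{
			{"init", "phase", "certs", "all", "--cert-dir", "/etc/kubernetes/pki"},
			{"init", "phase", "kubeconfig", "all", "--kubeconfig-dir", "/etc/kubernetes"},
		}
		for _, args := range phases {
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v:\n%s\n", args, out)
			if err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}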
	I0315 07:51:29.521502 3490722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:51:29.538525 3490722 api_server.go:72] duration metric: took 5m49.79464477s to wait for apiserver process to appear ...
	I0315 07:51:29.538551 3490722 api_server.go:88] waiting for apiserver healthz status ...
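The "waiting for apiserver healthz status" step polls the apiserver's /healthz endpoint until it answers ok. A minimal sketch of such a poll; the address and the TLS shortcut are assumptions for illustration, and minikube's own client wiring differs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Poll /healthz until the apiserver answers "ok" or we give up.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Skipping verification only for the sketch; real code
				// should trust the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver never became healthy")
	}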
	I0315 07:51:29.538596 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0315 07:51:29.538676 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0315 07:51:29.599859 3490722 cri.go:89] found id: "3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:29.599884 3490722 cri.go:89] found id: "aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:29.599889 3490722 cri.go:89] found id: ""
	I0315 07:51:29.599897 3490722 logs.go:276] 2 containers: [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5]
	I0315 07:51:29.599971 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.605961 3490722 ssh_runner.go:195] Run: which crictl
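Each "listing CRI containers" step shells out to crictl with a name filter and records the returned IDs; the same enumeration repeats below for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard, and storage-provisioner. A minimal sketch of that loop, with the command shape copied from the log above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// One `crictl ps -a --quiet --name=<component>` call per component,
		// as in the cri.go lines above. Output is one container ID per line.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard", "storage-provisioner",
		}
		for _, name := range components {
			out, err := exec.Command(
				"sudo", "crictl", "ps", "-a", "--quiet", "--name="+name,
			).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}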
	I0315 07:51:29.610920 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0315 07:51:29.611054 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0315 07:51:29.685499 3490722 cri.go:89] found id: "3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:29.685526 3490722 cri.go:89] found id: "288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:29.685530 3490722 cri.go:89] found id: ""
	I0315 07:51:29.685537 3490722 logs.go:276] 2 containers: [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf]
	I0315 07:51:29.685610 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.690445 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.695366 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0315 07:51:29.695452 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0315 07:51:29.771248 3490722 cri.go:89] found id: "9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:29.771267 3490722 cri.go:89] found id: "7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:29.771272 3490722 cri.go:89] found id: ""
	I0315 07:51:29.771279 3490722 logs.go:276] 2 containers: [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc]
	I0315 07:51:29.771337 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.776267 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.793698 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0315 07:51:29.793769 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0315 07:51:29.845789 3490722 cri.go:89] found id: "19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:29.845851 3490722 cri.go:89] found id: "eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:29.845878 3490722 cri.go:89] found id: ""
	I0315 07:51:29.845898 3490722 logs.go:276] 2 containers: [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d]
	I0315 07:51:29.845985 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.851019 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.855662 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0315 07:51:29.855846 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0315 07:51:29.914978 3490722 cri.go:89] found id: "24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:29.915050 3490722 cri.go:89] found id: "1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:29.915095 3490722 cri.go:89] found id: ""
	I0315 07:51:29.915121 3490722 logs.go:276] 2 containers: [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439]
	I0315 07:51:29.915205 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.923722 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.927822 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0315 07:51:29.927942 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0315 07:51:29.989735 3490722 cri.go:89] found id: "39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:29.989797 3490722 cri.go:89] found id: "6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:29.989825 3490722 cri.go:89] found id: ""
	I0315 07:51:29.989846 3490722 logs.go:276] 2 containers: [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d]
	I0315 07:51:29.989933 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.994641 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:29.998628 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0315 07:51:29.998750 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0315 07:51:30.121952 3490722 cri.go:89] found id: "d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:30.122032 3490722 cri.go:89] found id: "7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:30.122054 3490722 cri.go:89] found id: ""
	I0315 07:51:30.122083 3490722 logs.go:276] 2 containers: [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e]
	I0315 07:51:30.122196 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.127774 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.144595 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0315 07:51:30.144766 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0315 07:51:30.219866 3490722 cri.go:89] found id: "141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:30.219939 3490722 cri.go:89] found id: ""
	I0315 07:51:30.219962 3490722 logs.go:276] 1 containers: [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c]
	I0315 07:51:30.220076 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.226314 3490722 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0315 07:51:30.226500 3490722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0315 07:51:30.285822 3490722 cri.go:89] found id: "ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:30.285848 3490722 cri.go:89] found id: "4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:30.285854 3490722 cri.go:89] found id: ""
	I0315 07:51:30.285862 3490722 logs.go:276] 2 containers: [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2]
	I0315 07:51:30.285924 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.291346 3490722 ssh_runner.go:195] Run: which crictl
	I0315 07:51:30.296522 3490722 logs.go:123] Gathering logs for kubernetes-dashboard [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c] ...
	I0315 07:51:30.296594 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c"
	I0315 07:51:30.367022 3490722 logs.go:123] Gathering logs for storage-provisioner [4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2] ...
	I0315 07:51:30.367147 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2"
	I0315 07:51:30.420538 3490722 logs.go:123] Gathering logs for containerd ...
	I0315 07:51:30.420567 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0315 07:51:30.496522 3490722 logs.go:123] Gathering logs for container status ...
	I0315 07:51:30.496650 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
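The "container status" gather above is deliberately defensive: it resolves crictl via `which` and falls back to `docker ps -a` if the crictl invocation fails. The same fallback expressed directly in Go (a sketch; minikube runs the shell one-liner above over SSH instead):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prefer crictl; fall back to docker if crictl is missing or errors,
		// mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("neither crictl nor docker produced a container list:", err)
			return
		}
		fmt.Print(string(out))
	}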
	I0315 07:51:30.579010 3490722 logs.go:123] Gathering logs for kube-scheduler [eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d] ...
	I0315 07:51:30.579049 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d"
	I0315 07:51:30.630592 3490722 logs.go:123] Gathering logs for kube-proxy [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3] ...
	I0315 07:51:30.630626 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3"
	I0315 07:51:30.678109 3490722 logs.go:123] Gathering logs for kindnet [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842] ...
	I0315 07:51:30.678138 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842"
	I0315 07:51:30.723399 3490722 logs.go:123] Gathering logs for kube-apiserver [aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5] ...
	I0315 07:51:30.723428 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5"
	I0315 07:51:30.797968 3490722 logs.go:123] Gathering logs for dmesg ...
	I0315 07:51:30.798005 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0315 07:51:30.816622 3490722 logs.go:123] Gathering logs for describe nodes ...
	I0315 07:51:30.816653 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0315 07:51:30.972164 3490722 logs.go:123] Gathering logs for kube-apiserver [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465] ...
	I0315 07:51:30.972197 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465"
	I0315 07:51:31.050833 3490722 logs.go:123] Gathering logs for kindnet [7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e] ...
	I0315 07:51:31.050875 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e"
	I0315 07:51:31.123735 3490722 logs.go:123] Gathering logs for storage-provisioner [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803] ...
	I0315 07:51:31.123775 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803"
	I0315 07:51:31.190207 3490722 logs.go:123] Gathering logs for coredns [7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc] ...
	I0315 07:51:31.190236 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc"
	I0315 07:51:31.249826 3490722 logs.go:123] Gathering logs for kube-controller-manager [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2] ...
	I0315 07:51:31.249855 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2"
	I0315 07:51:31.354117 3490722 logs.go:123] Gathering logs for kube-controller-manager [6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d] ...
	I0315 07:51:31.354155 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d"
	I0315 07:51:31.441541 3490722 logs.go:123] Gathering logs for coredns [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e] ...
	I0315 07:51:31.441581 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e"
	I0315 07:51:31.515474 3490722 logs.go:123] Gathering logs for kube-scheduler [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423] ...
	I0315 07:51:31.515503 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423"
	I0315 07:51:31.589337 3490722 logs.go:123] Gathering logs for kube-proxy [1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439] ...
	I0315 07:51:31.589367 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439"
	I0315 07:51:31.679207 3490722 logs.go:123] Gathering logs for kubelet ...
	I0315 07:51:31.679237 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0315 07:51:31.765777 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151553     663 reflector.go:138] object-"kube-system"/"metrics-server-token-h2gnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-h2gnq" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766009 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151674     663 reflector.go:138] object-"default"/"default-token-82cb9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-82cb9" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766219 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151731     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766444 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151795     663 reflector.go:138] object-"kube-system"/"coredns-token-vw5pl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vw5pl" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766666 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151873     663 reflector.go:138] object-"kube-system"/"kindnet-token-jrrqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jrrqr" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.766881 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.151952     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-vqzhx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-vqzhx" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.767126 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152014     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.767352 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:56 old-k8s-version-591842 kubelet[663]: E0315 07:45:56.152079     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hnqzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hnqzn" is forbidden: User "system:node:old-k8s-version-591842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-591842' and this object
	W0315 07:51:31.777873 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:58 old-k8s-version-591842 kubelet[663]: E0315 07:45:58.854597     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.778081 3490722 logs.go:138] Found kubelet problem: Mar 15 07:45:59 old-k8s-version-591842 kubelet[663]: E0315 07:45:59.802434     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.784967 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:14 old-k8s-version-591842 kubelet[663]: E0315 07:46:14.653468     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.787050 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:20 old-k8s-version-591842 kubelet[663]: E0315 07:46:20.888372     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.788741 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:21 old-k8s-version-591842 kubelet[663]: E0315 07:46:21.893270     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.789087 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:22 old-k8s-version-591842 kubelet[663]: E0315 07:46:22.992818     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.789271 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:25 old-k8s-version-591842 kubelet[663]: E0315 07:46:25.647507     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.790036 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:30 old-k8s-version-591842 kubelet[663]: E0315 07:46:30.928822     663 pod_workers.go:191] Error syncing pod 1b02bdb3-5934-4002-980c-769d1de68357 ("storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1b02bdb3-5934-4002-980c-769d1de68357)"
	W0315 07:51:31.790614 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:35 old-k8s-version-591842 kubelet[663]: E0315 07:46:35.946453     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.793030 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:37 old-k8s-version-591842 kubelet[663]: E0315 07:46:37.659626     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.793817 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:42 old-k8s-version-591842 kubelet[663]: E0315 07:46:42.993141     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.794004 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:48 old-k8s-version-591842 kubelet[663]: E0315 07:46:48.651057     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.794583 3490722 logs.go:138] Found kubelet problem: Mar 15 07:46:58 old-k8s-version-591842 kubelet[663]: E0315 07:46:58.015895     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.794910 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:02 old-k8s-version-591842 kubelet[663]: E0315 07:47:02.992859     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	I0315 07:51:29.002553 3500589 out.go:204]   - Booting up control plane ...
	I0315 07:51:29.002670 3500589 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 07:51:29.009978 3500589 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 07:51:29.010073 3500589 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 07:51:29.022602 3500589 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 07:51:29.023683 3500589 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 07:51:29.023922 3500589 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 07:51:29.137586 3500589 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	W0315 07:51:31.797506 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:03 old-k8s-version-591842 kubelet[663]: E0315 07:47:03.649062     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.800132 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:15 old-k8s-version-591842 kubelet[663]: E0315 07:47:15.644773     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.800463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:16 old-k8s-version-591842 kubelet[663]: E0315 07:47:16.645024     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.802857 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:29 old-k8s-version-591842 kubelet[663]: E0315 07:47:29.656112     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.803187 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:30 old-k8s-version-591842 kubelet[663]: E0315 07:47:30.644347     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.803376 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:42 old-k8s-version-591842 kubelet[663]: E0315 07:47:42.645173     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.803959 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:45 old-k8s-version-591842 kubelet[663]: E0315 07:47:45.161682     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804281 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:52 old-k8s-version-591842 kubelet[663]: E0315 07:47:52.992815     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804463 3490722 logs.go:138] Found kubelet problem: Mar 15 07:47:54 old-k8s-version-591842 kubelet[663]: E0315 07:47:54.645803     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.804784 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.644489     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.804966 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:06 old-k8s-version-591842 kubelet[663]: E0315 07:48:06.645236     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805298 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.644500     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.805483 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:18 old-k8s-version-591842 kubelet[663]: E0315 07:48:18.645259     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805664 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:29 old-k8s-version-591842 kubelet[663]: E0315 07:48:29.644702     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.805987 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:33 old-k8s-version-591842 kubelet[663]: E0315 07:48:33.645925     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.806168 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:42 old-k8s-version-591842 kubelet[663]: E0315 07:48:42.644688     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.806493 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:45 old-k8s-version-591842 kubelet[663]: E0315 07:48:45.644849     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.808911 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:57 old-k8s-version-591842 kubelet[663]: E0315 07:48:57.654804     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0315 07:51:31.809237 3490722 logs.go:138] Found kubelet problem: Mar 15 07:48:59 old-k8s-version-591842 kubelet[663]: E0315 07:48:59.644341     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.809453 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:09 old-k8s-version-591842 kubelet[663]: E0315 07:49:09.653348     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.810032 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:14 old-k8s-version-591842 kubelet[663]: E0315 07:49:14.388467     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.810354 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:22 old-k8s-version-591842 kubelet[663]: E0315 07:49:22.993103     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.810536 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:23 old-k8s-version-591842 kubelet[663]: E0315 07:49:23.645937     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.810717 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:35 old-k8s-version-591842 kubelet[663]: E0315 07:49:35.645912     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.811038 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:36 old-k8s-version-591842 kubelet[663]: E0315 07:49:36.644319     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.813162 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:47 old-k8s-version-591842 kubelet[663]: E0315 07:49:47.658879     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.813501 3490722 logs.go:138] Found kubelet problem: Mar 15 07:49:50 old-k8s-version-591842 kubelet[663]: E0315 07:49:50.644297     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.813686 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:02 old-k8s-version-591842 kubelet[663]: E0315 07:50:02.644922     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.814013 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:05 old-k8s-version-591842 kubelet[663]: E0315 07:50:05.644906     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.814335 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.645285     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.814518 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.647267     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.814841 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: E0315 07:50:28.644332     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.815023 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:29 old-k8s-version-591842 kubelet[663]: E0315 07:50:29.645326     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816134 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:42 old-k8s-version-591842 kubelet[663]: E0315 07:50:42.644973     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816475 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.816660 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.816996 3490722 logs.go:138] Found kubelet problem: Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.817320 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.817501 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.817684 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:19 old-k8s-version-591842 kubelet[663]: E0315 07:51:19.648408     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.818007 3490722 logs.go:138] Found kubelet problem: Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: E0315 07:51:21.644468     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
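The long run of "Found kubelet problem" warnings above comes from scanning the journalctl output for error patterns; the matches are surfaced again in the "Problems detected in kubelet" summary below. A sketch of that scan (the regexp is an illustrative assumption, not necessarily minikube's exact matcher in logs.go):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		// Pull the last 400 kubelet journal lines, as the command above does,
		// and flag pod-sync errors (CrashLoopBackOff, ImagePullBackOff, ...).
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		problem := regexp.MustCompile(`pod_workers\.go.*Error syncing pod`)
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if problem.MatchString(sc.Text()) {
				fmt.Println("Found kubelet problem:", sc.Text())
			}
		}
	}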
	I0315 07:51:31.818018 3490722 logs.go:123] Gathering logs for etcd [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe] ...
	I0315 07:51:31.818032 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe"
	I0315 07:51:31.903966 3490722 logs.go:123] Gathering logs for etcd [288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf] ...
	I0315 07:51:31.903997 3490722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf"
	I0315 07:51:31.986316 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:31.986344 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0315 07:51:31.986390 3490722 out.go:239] X Problems detected in kubelet:
	W0315 07:51:31.986404 3490722 out.go:239]   Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.986414 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	W0315 07:51:31.986428 3490722 out.go:239]   Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.986435 3490722 out.go:239]   Mar 15 07:51:19 old-k8s-version-591842 kubelet[663]: E0315 07:51:19.648408     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0315 07:51:31.986449 3490722 out.go:239]   Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: E0315 07:51:21.644468     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	I0315 07:51:31.986457 3490722 out.go:304] Setting ErrFile to fd 2...
	I0315 07:51:31.986464 3490722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:51:38.141417 3500589 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.004204 seconds
	I0315 07:51:38.141546 3500589 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 07:51:38.161747 3500589 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 07:51:38.689025 3500589 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 07:51:38.689223 3500589 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-722347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 07:51:39.202430 3500589 kubeadm.go:309] [bootstrap-token] Using token: 2gd82e.ximahph8csemur3g
	I0315 07:51:39.204554 3500589 out.go:204]   - Configuring RBAC rules ...
	I0315 07:51:39.204685 3500589 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 07:51:39.213025 3500589 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 07:51:39.226347 3500589 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 07:51:39.230690 3500589 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 07:51:39.235258 3500589 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 07:51:39.242218 3500589 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 07:51:39.256887 3500589 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 07:51:39.513255 3500589 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 07:51:39.626946 3500589 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 07:51:39.628881 3500589 kubeadm.go:309] 
	I0315 07:51:39.628951 3500589 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 07:51:39.628962 3500589 kubeadm.go:309] 
	I0315 07:51:39.629037 3500589 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 07:51:39.629042 3500589 kubeadm.go:309] 
	I0315 07:51:39.629066 3500589 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 07:51:39.629123 3500589 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 07:51:39.629172 3500589 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 07:51:39.629176 3500589 kubeadm.go:309] 
	I0315 07:51:39.629227 3500589 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 07:51:39.629231 3500589 kubeadm.go:309] 
	I0315 07:51:39.629277 3500589 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 07:51:39.629281 3500589 kubeadm.go:309] 
	I0315 07:51:39.629331 3500589 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 07:51:39.629425 3500589 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 07:51:39.629492 3500589 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 07:51:39.629496 3500589 kubeadm.go:309] 
	I0315 07:51:39.629822 3500589 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 07:51:39.629903 3500589 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 07:51:39.629908 3500589 kubeadm.go:309] 
	I0315 07:51:39.629988 3500589 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2gd82e.ximahph8csemur3g \
	I0315 07:51:39.630086 3500589 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c1e97d56565bc0beab8ad4377b38bf3319ec6c746cc5fae6ed0032cea307c48a \
	I0315 07:51:39.630107 3500589 kubeadm.go:309] 	--control-plane 
	I0315 07:51:39.630111 3500589 kubeadm.go:309] 
	I0315 07:51:39.630203 3500589 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 07:51:39.630209 3500589 kubeadm.go:309] 
	I0315 07:51:39.630484 3500589 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2gd82e.ximahph8csemur3g \
	I0315 07:51:39.630588 3500589 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c1e97d56565bc0beab8ad4377b38bf3319ec6c746cc5fae6ed0032cea307c48a 
	I0315 07:51:39.635513 3500589 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0315 07:51:39.635627 3500589 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 07:51:39.635972 3500589 cni.go:84] Creating CNI manager for ""
	I0315 07:51:39.635985 3500589 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:51:39.638532 3500589 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 07:51:39.640686 3500589 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 07:51:39.652601 3500589 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 07:51:39.652620 3500589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 07:51:39.691276 3500589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 07:51:40.876088 3500589 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.184772671s)
	I0315 07:51:40.876122 3500589 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 07:51:40.876232 3500589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:51:40.876305 3500589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-722347 minikube.k8s.io/updated_at=2024_03_15T07_51_40_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56 minikube.k8s.io/name=embed-certs-722347 minikube.k8s.io/primary=true
	I0315 07:51:41.051910 3500589 ops.go:34] apiserver oom_adj: -16
	I0315 07:51:41.052010 3500589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:51:41.552087 3500589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 07:51:41.987133 3490722 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0315 07:51:41.998532 3490722 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0315 07:51:42.008114 3490722 out.go:177] 
	W0315 07:51:42.013913 3490722 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0315 07:51:42.013959 3490722 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0315 07:51:42.013981 3490722 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0315 07:51:42.013988 3490722 out.go:239] * 
	W0315 07:51:42.015597 3490722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 07:51:42.018503 3490722 out.go:177] 
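The K8S_UNHEALTHY_CONTROL_PLANE exit above carries its own recovery suggestion. A minimal sketch of that recovery, assuming the profile name and the v1.20.0 target version visible in this log (the driver/runtime flags mirror the "docker" + "containerd" combination logged during CNI selection in this output and are illustrative, not copied from the failing run):

	minikube delete --all --purge
	minikube start -p old-k8s-version-591842 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0

`delete --all --purge` removes every profile plus the ~/.minikube cache directory, so the follow-up start re-downloads its artifacts instead of reusing possibly stale state.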
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c8441ee92457c       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   73a8bde3ab233       dashboard-metrics-scraper-8d5bb5db8-2wnk8
	ae03a9797792d       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   4e15d2dd71844       storage-provisioner
	141c1019fad2f       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   f5c3e831f3b38       kubernetes-dashboard-cd95d586-7fkx2
	4518d148ada79       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   4e15d2dd71844       storage-provisioner
	9a5842dfbaea4       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   07b030b94131a       coredns-74ff55c5b-5zhc9
	24cc1f1e367cb       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   73bb07e1652b8       kube-proxy-pdn2n
	d3767d4efec19       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   7f6797fa1f610       kindnet-9mqv4
	1cb8a13f25e13       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   1cc07abf2b05a       busybox
	3e9f3d24aea76       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   c1db48ada60e0       etcd-old-k8s-version-591842
	3aee10dec5ed0       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   6cf2f921c3cdf       kube-apiserver-old-k8s-version-591842
	39d3647417de7       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   aa4a3ddf943cb       kube-controller-manager-old-k8s-version-591842
	19906a36a5bc6       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   2745bd156d2e4       kube-scheduler-old-k8s-version-591842
	4cf010663380e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   521cfec301725       busybox
	7d25cff184892       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   bf3adcb851ee9       coredns-74ff55c5b-5zhc9
	1909102cb4c31       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   5209bf3170a14       kube-proxy-pdn2n
	7113762b904f4       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   c2b86ece2a094       kindnet-9mqv4
	6e6c79cda8326       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   25782a2d7ba87       kube-controller-manager-old-k8s-version-591842
	eb46f252b98e7       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   cecb29930549c       kube-scheduler-old-k8s-version-591842
	aa813eda9f1da       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   c3e742b203849       kube-apiserver-old-k8s-version-591842
	288c914996f04       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   7616c9e23ef1d       etcd-old-k8s-version-591842
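The table above is the node's CRI-level container view. A hedged way to reproduce it by hand (crictl is present in the standard minikube node image; the profile name is carried over from these logs):

	minikube -p old-k8s-version-591842 ssh "sudo crictl ps -a"

The `-a` flag includes exited containers, which is why each control-plane component appears twice: attempt 0 from before the restart (Exited) and attempt 1 from after it (Running).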
	
	
	==> containerd <==
	Mar 15 07:47:29 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:29.653100255Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 15 07:47:29 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:29.654734260Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.647966996Z" level=info msg="CreateContainer within sandbox \"73a8bde3ab233fe4898e2abd1a3f4732846c8aa764491c9f3231263c8457dfe7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.680324893Z" level=info msg="CreateContainer within sandbox \"73a8bde3ab233fe4898e2abd1a3f4732846c8aa764491c9f3231263c8457dfe7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c\""
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.681219568Z" level=info msg="StartContainer for \"ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c\""
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.745003617Z" level=info msg="StartContainer for \"ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c\" returns successfully"
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.772954298Z" level=info msg="shim disconnected" id=ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.773029931Z" level=warning msg="cleaning up after shim disconnected" id=ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c namespace=k8s.io
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.773041541Z" level=info msg="cleaning up dead shim"
	Mar 15 07:47:44 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:44.781705257Z" level=warning msg="cleanup warnings time=\"2024-03-15T07:47:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2970 runtime=io.containerd.runc.v2\n"
	Mar 15 07:47:45 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:45.167044881Z" level=info msg="RemoveContainer for \"bbb7bdf5ec489800fb498e39fb48eccfc81007bb9eee3a36f4ab67d9c3a32d30\""
	Mar 15 07:47:45 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:47:45.185763466Z" level=info msg="RemoveContainer for \"bbb7bdf5ec489800fb498e39fb48eccfc81007bb9eee3a36f4ab67d9c3a32d30\" returns successfully"
	Mar 15 07:48:57 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:48:57.646459680Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:48:57 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:48:57.652857438Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 15 07:48:57 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:48:57.654431899Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.647061699Z" level=info msg="CreateContainer within sandbox \"73a8bde3ab233fe4898e2abd1a3f4732846c8aa764491c9f3231263c8457dfe7\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.662441696Z" level=info msg="CreateContainer within sandbox \"73a8bde3ab233fe4898e2abd1a3f4732846c8aa764491c9f3231263c8457dfe7\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473\""
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.662855315Z" level=info msg="StartContainer for \"c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473\""
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.727041490Z" level=info msg="StartContainer for \"c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473\" returns successfully"
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.771950617Z" level=info msg="shim disconnected" id=c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.772011744Z" level=warning msg="cleaning up after shim disconnected" id=c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473 namespace=k8s.io
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.772022804Z" level=info msg="cleaning up dead shim"
	Mar 15 07:49:13 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:13.780634074Z" level=warning msg="cleanup warnings time=\"2024-03-15T07:49:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3225 runtime=io.containerd.runc.v2\n"
	Mar 15 07:49:14 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:14.405385669Z" level=info msg="RemoveContainer for \"ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c\""
	Mar 15 07:49:14 old-k8s-version-591842 containerd[568]: time="2024-03-15T07:49:14.410990904Z" level=info msg="RemoveContainer for \"ac754e0933e1f658cc96c097e5a23e5e08c460ba63919802a82dcc883556a77c\" returns successfully"
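Every pull attempt for fake.domain/registry.k8s.io/echoserver:1.4 fails the same way: fake.domain does not resolve, so containerd reports `no such host` and the metrics-server pod stays in ImagePullBackOff, matching the kubelet warnings near the top of this log. One way to confirm the image reference the deployment is actually carrying, assuming the kubectl context matches the profile name:

	kubectl --context old-k8s-version-591842 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'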
	
	
	==> coredns [7d25cff18489226da998ecaf53342d2b701293554532a64fa95727207acd81dc] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37629 - 62494 "HINFO IN 7250847379127815357.2933103892824746515. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021557872s
	
	
	==> coredns [9a5842dfbaea4b23c98bf7b430f498be93e7775283a1f0649b405d63584f2d1e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42605 - 47529 "HINFO IN 9206751975402680439.6531928222192876371. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020469972s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0315 07:46:30.600120       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-15 07:46:00.599506861 +0000 UTC m=+0.022877047) (total time: 30.000517334s):
	Trace[2019727887]: [30.000517334s] [30.000517334s] END
	E0315 07:46:30.600162       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0315 07:46:30.601146       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-15 07:46:00.60072161 +0000 UTC m=+0.024091796) (total time: 30.000406666s):
	Trace[939984059]: [30.000406666s] [30.000406666s] END
	E0315 07:46:30.601165       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0315 07:46:30.601932       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-15 07:46:00.601617441 +0000 UTC m=+0.024987635) (total time: 30.000299387s):
	Trace[1474941318]: [30.000299387s] [30.000299387s] END
	E0315 07:46:30.601950       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
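The restarted CoreDNS instance spends its first 30 seconds unable to reach the API server's service IP (dial tcp 10.96.0.1:443: i/o timeout) before the ready plugin settles. A hedged in-cluster spot-check of that same path, using a throwaway pod (the pod name and image are illustrative, and this assumes the API server still permits anonymous access to /healthz, the default on v1.20):

	kubectl --context old-k8s-version-591842 run api-probe --image=curlimages/curl \
	  --rm -it --restart=Never -- curl -sk https://10.96.0.1:443/healthz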
	
	
	==> describe nodes <==
	Name:               old-k8s-version-591842
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-591842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eb91ca077853a04b88fe45b419a329f4c2efcc56
	                    minikube.k8s.io/name=old-k8s-version-591842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T07_42_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 07:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-591842
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 07:51:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 07:46:56 +0000   Fri, 15 Mar 2024 07:42:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 07:46:56 +0000   Fri, 15 Mar 2024 07:42:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 07:46:56 +0000   Fri, 15 Mar 2024 07:42:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 07:46:56 +0000   Fri, 15 Mar 2024 07:43:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-591842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 682867af244d4aed830cdfbe1067ba7b
	  System UUID:                6b20f7fb-06d7-4232-92df-bac481c756ba
	  Boot ID:                    be4a23ea-b3ea-44f1-92fd-06f8e96fb1b3
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-74ff55c5b-5zhc9                            100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m31s
	  kube-system                 etcd-old-k8s-version-591842                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m38s
	  kube-system                 kindnet-9mqv4                                      100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m31s
	  kube-system                 kube-apiserver-old-k8s-version-591842              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-old-k8s-version-591842     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-pdn2n                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-scheduler-old-k8s-version-591842              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 metrics-server-9975d5f86-9j72g                     100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-2wnk8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-7fkx2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m57s (x5 over 8m57s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m57s (x5 over 8m57s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m57s (x5 over 8m57s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m38s                  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s                  kubelet     Node old-k8s-version-591842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s                  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m31s                  kubelet     Node old-k8s-version-591842 status is now: NodeReady
	  Normal  Starting                 8m29s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-591842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m43s                  kube-proxy  Starting kube-proxy.
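This section is standard `kubectl describe node` output; to regenerate it against the same cluster (context name assumed, as elsewhere in this report):

	kubectl --context old-k8s-version-591842 describe node old-k8s-version-591842

The two kubelet "Starting" events (8m38s and 5m56s ago) line up with the attempt 0/attempt 1 container pairs in the container status section above, i.e. one full stop/start of the node.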
	
	
	==> dmesg <==
	[  +0.001081] FS-Cache: O-key=[8] '10713b0000000000'
	[  +0.000757] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001017] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=00000000c74f53c1
	[  +0.001220] FS-Cache: N-key=[8] '10713b0000000000'
	[  +0.003939] FS-Cache: Duplicate cookie detected
	[  +0.000819] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001156] FS-Cache: O-cookie d=000000007332a028{9p.inode} n=000000005de1f097
	[  +0.001196] FS-Cache: O-key=[8] '10713b0000000000'
	[  +0.000843] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=000000008b64c896
	[  +0.001181] FS-Cache: N-key=[8] '10713b0000000000'
	[  +3.002798] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001129] FS-Cache: O-cookie d=000000007332a028{9p.inode} n=000000007ad487f1
	[  +0.001202] FS-Cache: O-key=[8] '0f713b0000000000'
	[  +0.000802] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001168] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=000000005145f6ae
	[  +0.001258] FS-Cache: N-key=[8] '0f713b0000000000'
	[  +0.293404] FS-Cache: Duplicate cookie detected
	[  +0.000810] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001101] FS-Cache: O-cookie d=000000007332a028{9p.inode} n=000000005b55082f
	[  +0.001118] FS-Cache: O-key=[8] '15713b0000000000'
	[  +0.000729] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000007332a028{9p.inode} n=00000000c74f53c1
	[  +0.001079] FS-Cache: N-key=[8] '15713b0000000000'
	
	
	==> etcd [288c914996f04026c4b46ed1ae770f4350127a00f6f762dc1f2c827e02e963bf] <==
	raft2024/03/15 07:42:47 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/15 07:42:47 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/15 07:42:47 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/15 07:42:47 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-15 07:42:47.946052 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-15 07:42:47.947148 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-15 07:42:47.947318 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-15 07:42:47.947410 I | etcdserver: published {Name:old-k8s-version-591842 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-15 07:42:47.947612 I | embed: ready to serve client requests
	2024-03-15 07:42:47.948379 I | embed: ready to serve client requests
	2024-03-15 07:42:47.953339 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-15 07:42:47.987902 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-15 07:43:11.153334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:43:20.867847 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:43:30.868030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:43:40.867855 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:43:50.868056 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:00.868059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:10.867827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:20.868141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:30.868075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:40.868019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:44:50.868061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:45:00.868195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:45:10.869116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [3e9f3d24aea76b58fc8b3cc405e7a29d5223e2175c00b4ea4e8698e262cc71fe] <==
	2024-03-15 07:47:43.262444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:47:53.262456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:03.262521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:13.262376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:23.262501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:33.262762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:43.262537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:48:53.262403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:03.262369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:13.262460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:23.262356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:33.262367 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:43.262409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:49:53.262541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:03.262290 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:13.263042 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:23.262355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:33.262422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:43.262438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:50:53.262486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:51:03.262428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:51:13.262347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:51:23.262595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:51:33.262520 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-15 07:51:43.262523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 07:51:43 up 16:34,  0 users,  load average: 1.93, 1.94, 2.54
	Linux old-k8s-version-591842 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7113762b904f49fb25d8457f9978706dc59279f68df6e804674cc618fc850a4e] <==
	podIP = 192.168.76.2
	I0315 07:43:13.016342       1 main.go:116] setting mtu 1500 for CNI 
	I0315 07:43:13.016366       1 main.go:146] kindnetd IP family: "ipv4"
	I0315 07:43:13.016378       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 07:43:43.244220       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0315 07:43:43.308320       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:43:43.308355       1 main.go:227] handling current node
	I0315 07:43:53.342015       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:43:53.342225       1 main.go:227] handling current node
	I0315 07:44:03.371677       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:03.371708       1 main.go:227] handling current node
	I0315 07:44:13.375796       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:13.375822       1 main.go:227] handling current node
	I0315 07:44:23.409781       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:23.409981       1 main.go:227] handling current node
	I0315 07:44:33.441300       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:33.441331       1 main.go:227] handling current node
	I0315 07:44:43.451170       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:43.451200       1 main.go:227] handling current node
	I0315 07:44:53.483519       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:44:53.483712       1 main.go:227] handling current node
	I0315 07:45:03.528812       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:45:03.528842       1 main.go:227] handling current node
	I0315 07:45:13.549196       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:45:13.549393       1 main.go:227] handling current node
	
	
	==> kindnet [d3767d4efec190990e83e3395e150808bcbe00b98bac2746ed31bbba3417e842] <==
	I0315 07:49:39.118817       1 main.go:227] handling current node
	I0315 07:49:49.127917       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:49:49.127949       1 main.go:227] handling current node
	I0315 07:49:59.136359       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:49:59.136387       1 main.go:227] handling current node
	I0315 07:50:09.151701       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:09.151730       1 main.go:227] handling current node
	I0315 07:50:19.155498       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:19.155528       1 main.go:227] handling current node
	I0315 07:50:29.166180       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:29.166284       1 main.go:227] handling current node
	I0315 07:50:39.175413       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:39.175442       1 main.go:227] handling current node
	I0315 07:50:49.186432       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:49.186460       1 main.go:227] handling current node
	I0315 07:50:59.194189       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:50:59.194219       1 main.go:227] handling current node
	I0315 07:51:09.207493       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:51:09.207691       1 main.go:227] handling current node
	I0315 07:51:19.223738       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:51:19.223777       1 main.go:227] handling current node
	I0315 07:51:29.229264       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:51:29.229490       1 main.go:227] handling current node
	I0315 07:51:39.241009       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0315 07:51:39.241140       1 main.go:227] handling current node
	
	
	==> kube-apiserver [3aee10dec5ed07dafc19f5a933fa5ded3a656c5035a025358b33b9e64d0af465] <==
	I0315 07:48:17.528052       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:48:17.528061       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0315 07:48:51.829985       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:48:51.830062       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:48:51.830071       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0315 07:48:58.985569       1 handler_proxy.go:102] no RequestInfo found in the context
	E0315 07:48:58.985639       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:48:58.985649       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:49:24.595911       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:49:24.595952       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:49:24.595961       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0315 07:49:57.879448       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:49:57.879733       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:49:57.879843       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0315 07:50:41.122869       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:50:41.122912       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:50:41.122921       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0315 07:50:57.177921       1 handler_proxy.go:102] no RequestInfo found in the context
	E0315 07:50:57.177984       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 07:50:57.177994       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 07:51:18.743244       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:51:18.743291       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:51:18.743301       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
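The recurring pattern in this log, `no RequestInfo found in the context` followed by a 503 loading the OpenAPI spec for v1beta1.metrics.k8s.io, is the aggregation layer failing to reach metrics-server, which never started (ImagePullBackOff above); the controller-manager discovery warnings in a later section share the same root cause. A quick check of the aggregated API's registration, assuming the usual context name:

	kubectl --context old-k8s-version-591842 get apiservice v1beta1.metrics.k8s.io
	# AVAILABLE should show False (e.g. MissingEndpoints) while metrics-server has no ready pods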
	
	
	==> kube-apiserver [aa813eda9f1da5af9142848c19fa43cf2a6a68098ca63c5783479ec6681835d5] <==
	I0315 07:42:54.484830       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0315 07:42:54.484860       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0315 07:42:54.500043       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0315 07:42:54.503977       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0315 07:42:54.504001       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0315 07:42:54.932842       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 07:42:54.968837       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0315 07:42:55.082404       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0315 07:42:55.083876       1 controller.go:606] quota admission added evaluator for: endpoints
	I0315 07:42:55.097974       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 07:42:56.129426       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0315 07:42:56.805673       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0315 07:42:56.853017       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0315 07:43:05.324574       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 07:43:12.072205       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0315 07:43:12.136280       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0315 07:43:32.816419       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:43:32.816463       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:43:32.816471       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0315 07:44:10.772752       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:44:10.772793       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:44:10.772890       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0315 07:44:40.924613       1 client.go:360] parsed scheme: "passthrough"
	I0315 07:44:40.924654       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0315 07:44:40.924662       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [39d3647417de7d1e56cd0022025c95051ea33f8094101c1d1e18fb578da8dac2] <==
	W0315 07:47:20.553708       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:47:46.606394       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:47:52.204196       1 request.go:655] Throttling request took 1.048280382s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0315 07:47:53.055612       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:48:17.108306       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:48:24.706208       1 request.go:655] Throttling request took 1.048848825s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0315 07:48:25.557699       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:48:47.610182       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:48:57.208218       1 request.go:655] Throttling request took 1.04838658s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0315 07:48:58.059783       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:49:18.112092       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:49:29.710344       1 request.go:655] Throttling request took 1.048332514s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0315 07:49:30.561729       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:49:48.613884       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:50:02.212142       1 request.go:655] Throttling request took 1.047923232s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0315 07:50:03.064166       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:50:19.115614       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:50:34.714836       1 request.go:655] Throttling request took 1.048421865s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0315 07:50:35.566502       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:50:49.617318       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:51:07.216979       1 request.go:655] Throttling request took 1.048037246s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0315 07:51:08.068707       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0315 07:51:20.119737       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0315 07:51:39.720896       1 request.go:655] Throttling request took 1.048288583s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0315 07:51:40.572580       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [6e6c79cda832629dbbd2f62624a876664c7a5c6d3be1a6dda27e404d9b60545d] <==
	I0315 07:43:12.140154       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0315 07:43:12.140181       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0315 07:43:12.140420       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0315 07:43:12.150081       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0315 07:43:12.156725       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0315 07:43:12.185070       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-591842" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 07:43:12.187626       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0315 07:43:12.213019       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9mqv4"
	I0315 07:43:12.237945       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pdn2n"
	I0315 07:43:12.318945       1 shared_informer.go:247] Caches are synced for resource quota 
	I0315 07:43:12.319015       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0315 07:43:12.359630       1 shared_informer.go:247] Caches are synced for resource quota 
	I0315 07:43:12.362641       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	E0315 07:43:12.420673       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"007fb872-eba2-4195-bdfd-80d8e1a0764b", ResourceVersion:"279", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63846085376, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001937ac0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001937ae0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001937b00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001800280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001937
b20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001937b40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001937b80)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40017bcb40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40016b1e38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a7a380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40008efa20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40016b1e88)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0315 07:43:12.453737       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0315 07:43:12.733669       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0315 07:43:12.733695       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0315 07:43:12.753958       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0315 07:43:13.270380       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0315 07:43:13.294407       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-jxqhl"
	I0315 07:43:17.103365       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0315 07:45:18.467637       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0315 07:45:18.546005       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0315 07:45:18.600297       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0315 07:45:18.683249       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
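
The two "Operation cannot be fulfilled ... the object has been modified" errors above (on the kube-proxy DaemonSet and the "edit" ClusterRole) are ordinary optimistic-concurrency conflicts: another writer updated the object between this controller's read and its write, so the stale resourceVersion was rejected. Controllers recover by re-reading and retrying. A minimal client-go sketch of that pattern, assuming an existing clientset; the annotation change is hypothetical and only stands in for the real mutation:

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func touchKubeProxy(cs kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the resourceVersion we write back is current.
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(
				context.TODO(), "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example.com/touched"] = "true" // hypothetical change
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(
				context.TODO(), ds, metav1.UpdateOptions{})
			return err // a Conflict error here makes RetryOnConflict try again
		})
	}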
	
	
	==> kube-proxy [1909102cb4c316a85c844de34fe80392e27c7c27b923245563422bfbd4f97439] <==
	I0315 07:43:14.300222       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0315 07:43:14.300317       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0315 07:43:14.319706       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0315 07:43:14.319817       1 server_others.go:185] Using iptables Proxier.
	I0315 07:43:14.320026       1 server.go:650] Version: v1.20.0
	I0315 07:43:14.320658       1 config.go:315] Starting service config controller
	I0315 07:43:14.320722       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0315 07:43:14.320758       1 config.go:224] Starting endpoint slice config controller
	I0315 07:43:14.323755       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0315 07:43:14.423179       1 shared_informer.go:247] Caches are synced for service config 
	I0315 07:43:14.423953       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [24cc1f1e367cbeae35948802bbf43f632ee4bd55c039a00d8c5a76566109f2c3] <==
	I0315 07:46:00.660048       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0315 07:46:00.661929       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0315 07:46:00.683993       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0315 07:46:00.684239       1 server_others.go:185] Using iptables Proxier.
	I0315 07:46:00.684624       1 server.go:650] Version: v1.20.0
	I0315 07:46:00.685459       1 config.go:315] Starting service config controller
	I0315 07:46:00.685485       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0315 07:46:00.685505       1 config.go:224] Starting endpoint slice config controller
	I0315 07:46:00.685628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0315 07:46:00.785608       1 shared_informer.go:247] Caches are synced for service config 
	I0315 07:46:00.785809       1 shared_informer.go:247] Caches are synced for endpoint slice config 
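
Both kube-proxy instances log 'Unknown proxy mode "", assuming iptables proxy' because the mode field of their KubeProxyConfiguration is empty, so the Linux default (iptables) is used; the behavior is correct, only implicit. A hedged fragment of the relevant kube-system/kube-proxy ConfigMap content, with the field set explicitly:

	# config.conf fragment (sketch); an explicit mode avoids the fallback warning
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	mode: "iptables"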
	
	
	==> kube-scheduler [19906a36a5bc65fa9053fa523d8e7bd5e048a857bef608d62d4b59e39df92423] <==
	I0315 07:45:51.336575       1 serving.go:331] Generated self-signed cert in-memory
	W0315 07:45:56.153108       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:45:56.153150       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:45:56.153206       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:45:56.153212       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:45:56.408405       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0315 07:45:56.408923       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:45:56.408932       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:45:56.408947       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0315 07:45:56.510913       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
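
The "Unable to get configmap/extension-apiserver-authentication" warning above is tolerated here (the scheduler continues without the lookup and the client-CA cache syncs a moment later), but the log names the fix itself. A hedged, filled-in version of the suggested command; the binding name is arbitrary, and --user is used because kube-scheduler authenticates as the user system:kube-scheduler rather than as a service account:

	kubectl --context old-k8s-version-591842 -n kube-system \
	  create rolebinding extension-apiserver-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler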
	
	
	==> kube-scheduler [eb46f252b98e775a9323a10c057054ca3ab5d7e9f5f3fc457cb1cc0b0674781d] <==
	W0315 07:42:53.617683       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 07:42:53.617717       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 07:42:53.617731       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 07:42:53.617737       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 07:42:53.725308       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0315 07:42:53.725663       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:42:53.727737       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 07:42:53.728388       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0315 07:42:53.757539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 07:42:53.757870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:42:53.758098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 07:42:53.758294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 07:42:53.758526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 07:42:53.766252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 07:42:53.768309       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 07:42:53.769726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:42:53.770258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 07:42:53.771037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 07:42:53.771319       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 07:42:53.774165       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 07:42:54.660551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 07:42:54.729354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 07:42:54.776833       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 07:42:54.778625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0315 07:42:55.228339       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
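
The burst of "forbidden" list/watch errors at 07:42:53-54 is a bootstrap race: the scheduler starts before its RBAC grants are fully served, and the last line shows its caches syncing seconds later. If such errors persisted, a quick probe of the effective permissions (hedged example):

	kubectl --context old-k8s-version-591842 auth can-i list pods \
	  --as=system:kube-scheduler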
	
	
	==> kubelet <==
	Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: I0315 07:50:17.644466     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.645285     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:50:17 old-k8s-version-591842 kubelet[663]: E0315 07:50:17.647267     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: I0315 07:50:28.643990     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:50:28 old-k8s-version-591842 kubelet[663]: E0315 07:50:28.644332     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:50:29 old-k8s-version-591842 kubelet[663]: E0315 07:50:29.645326     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:50:42 old-k8s-version-591842 kubelet[663]: E0315 07:50:42.644973     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: I0315 07:50:43.644084     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:50:43 old-k8s-version-591842 kubelet[663]: E0315 07:50:43.644572     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.644836     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: I0315 07:50:54.644988     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:50:54 old-k8s-version-591842 kubelet[663]: E0315 07:50:54.645785     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: I0315 07:51:07.644862     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.645648     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:51:07 old-k8s-version-591842 kubelet[663]: E0315 07:51:07.651347     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:51:19 old-k8s-version-591842 kubelet[663]: E0315 07:51:19.648408     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: I0315 07:51:21.644069     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:51:21 old-k8s-version-591842 kubelet[663]: E0315 07:51:21.644468     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:51:32 old-k8s-version-591842 kubelet[663]: E0315 07:51:32.647274     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 15 07:51:36 old-k8s-version-591842 kubelet[663]: I0315 07:51:36.644025     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c8441ee92457c869613120768699a33ceb84581653db7649198ccb9e71d8c473
	Mar 15 07:51:36 old-k8s-version-591842 kubelet[663]: E0315 07:51:36.644366     663 pod_workers.go:191] Error syncing pod 4f033064-ca42-40d1-8805-5d3ff76c1fa1 ("dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-2wnk8_kubernetes-dashboard(4f033064-ca42-40d1-8805-5d3ff76c1fa1)"
	Mar 15 07:51:43 old-k8s-version-591842 kubelet[663]: E0315 07:51:43.662949     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 15 07:51:43 old-k8s-version-591842 kubelet[663]: E0315 07:51:43.663022     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 15 07:51:43 old-k8s-version-591842 kubelet[663]: E0315 07:51:43.663253     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-h2gnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-9j72g_kube-system(84487c4
f-2bb4-4d4f-9257-aaf715457d3f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 15 07:51:43 old-k8s-version-591842 kubelet[663]: E0315 07:51:43.663300     663 pod_workers.go:191] Error syncing pod 84487c4f-2bb4-4d4f-9257-aaf715457d3f ("metrics-server-9975d5f86-9j72g_kube-system(84487c4f-2bb4-4d4f-9257-aaf715457d3f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
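
The ErrImagePull loop above is the scenario under test: metrics-server is pointed at the deliberately non-resolvable registry fake.domain, so the pull can never succeed. To reproduce the failure independently of the kubelet, one could retry the pull from inside the node (a hedged diagnostic sketch, assuming crictl is available there as it is in minikube's containerd images):

	minikube -p old-k8s-version-591842 ssh -- \
	  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4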
	
	
	==> kubernetes-dashboard [141c1019fad2f75625f7533bb7db48335b3215e4c610540fdc2307cccb03f95c] <==
	2024/03/15 07:46:23 Using namespace: kubernetes-dashboard
	2024/03/15 07:46:23 Using in-cluster config to connect to apiserver
	2024/03/15 07:46:23 Using secret token for csrf signing
	2024/03/15 07:46:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/15 07:46:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/15 07:46:23 Successful initial request to the apiserver, version: v1.20.0
	2024/03/15 07:46:23 Generating JWE encryption key
	2024/03/15 07:46:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/15 07:46:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/15 07:46:23 Initializing JWE encryption key from synchronized object
	2024/03/15 07:46:23 Creating in-cluster Sidecar client
	2024/03/15 07:46:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:46:23 Serving insecurely on HTTP port: 9090
	2024/03/15 07:46:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:47:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:47:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:48:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:48:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:49:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:49:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:50:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:50:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:51:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/15 07:46:23 Starting overwatch
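
Every metric-client health check above fails for the reason visible in the kubelet section: the dashboard-metrics-scraper pod is in CrashLoopBackOff, so its service has no ready backend. A hedged cross-check; the k8s-app label follows the upstream dashboard manifests and may differ elsewhere:

	kubectl --context old-k8s-version-591842 -n kubernetes-dashboard \
	  get pods,endpoints -l k8s-app=dashboard-metrics-scraper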
	
	
	==> storage-provisioner [4518d148ada79ac861923312551230f9f3718d390d5b3600bb80715fde6119d2] <==
	I0315 07:46:00.673678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0315 07:46:30.676524       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
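
The fatal 'dial tcp 10.96.0.1:443: i/o timeout' means this first provisioner instance could not reach the apiserver through the in-cluster service VIP during the restart window; the replacement instance below succeeds. A hedged first check for this class of failure:

	kubectl --context old-k8s-version-591842 get svc,endpoints kubernetes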
	
	
	==> storage-provisioner [ae03a9797792da6de45dd59f24e27789b8b12426efd48a8a33efb4524e861803] <==
	I0315 07:46:41.767740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 07:46:41.787987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 07:46:41.788041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 07:46:59.261313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 07:46:59.261642       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-591842_087faa41-69b4-493d-a540-51db6d805520!
	I0315 07:46:59.262534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15d8587f-a922-4470-844a-e6979cddac33", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-591842_087faa41-69b4-493d-a540-51db6d805520 became leader
	I0315 07:46:59.362183       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-591842_087faa41-69b4-493d-a540-51db6d805520!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591842 -n old-k8s-version-591842
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-591842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-9j72g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-591842 describe pod metrics-server-9975d5f86-9j72g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-591842 describe pod metrics-server-9975d5f86-9j72g: exit status 1 (163.379105ms)

                                                
                                                
** stderr ** 
	E0315 07:51:45.573544 3503809 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0315 07:51:45.578712 3503809 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0315 07:51:45.580952 3503809 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0315 07:51:45.584211 3503809 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0315 07:51:45.593347 3503809 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0315 07:51:45.608210 3503809 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-9j72g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-591842 describe pod metrics-server-9975d5f86-9j72g: exit status 1
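
The memcache.go errors in the stderr block are a side effect of the same failure: the aggregated metrics.k8s.io/v1beta1 API is registered, but its metrics-server backend never became ready, so kubectl's discovery cache cannot enumerate its resources. A hedged way to confirm the aggregation status:

	kubectl --context old-k8s-version-591842 get apiservice v1beta1.metrics.k8s.io
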
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.89s)

                                                
                                    

Test pass (297/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.3
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.52
12 TestDownloadOnly/v1.28.4/json-events 6.34
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.29.0-rc.2/json-events 6.38
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 118.88
38 TestAddons/parallel/Registry 14.48
40 TestAddons/parallel/InspektorGadget 10.91
41 TestAddons/parallel/MetricsServer 5.81
44 TestAddons/parallel/CSI 74.81
45 TestAddons/parallel/Headlamp 12.53
46 TestAddons/parallel/CloudSpanner 6.59
47 TestAddons/parallel/LocalPath 51.32
48 TestAddons/parallel/NvidiaDevicePlugin 5.57
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.26
54 TestCertOptions 37.53
55 TestCertExpiration 230.86
57 TestForceSystemdFlag 46.11
58 TestForceSystemdEnv 42.23
59 TestDockerEnvContainerd 48.31
64 TestErrorSpam/setup 28.8
65 TestErrorSpam/start 0.73
66 TestErrorSpam/status 1.04
67 TestErrorSpam/pause 1.65
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 1.44
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 60.04
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.02
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
81 TestFunctional/serial/CacheCmd/cache/add_local 1.52
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
89 TestFunctional/serial/ExtraConfig 45.86
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.66
92 TestFunctional/serial/LogsFileCmd 1.66
93 TestFunctional/serial/InvalidService 4.36
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 12.78
97 TestFunctional/parallel/DryRun 0.44
98 TestFunctional/parallel/InternationalLanguage 0.24
99 TestFunctional/parallel/StatusCmd 1.35
103 TestFunctional/parallel/ServiceCmdConnect 9.74
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 25.81
107 TestFunctional/parallel/SSHCmd 0.81
108 TestFunctional/parallel/CpCmd 2.09
110 TestFunctional/parallel/FileSync 0.33
111 TestFunctional/parallel/CertSync 2.14
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
119 TestFunctional/parallel/License 0.2
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.32
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
133 TestFunctional/parallel/ServiceCmd/List 0.6
134 TestFunctional/parallel/ProfileCmd/profile_list 0.52
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
138 TestFunctional/parallel/MountCmd/any-port 7.83
139 TestFunctional/parallel/ServiceCmd/Format 0.46
140 TestFunctional/parallel/ServiceCmd/URL 0.49
141 TestFunctional/parallel/MountCmd/specific-port 2.29
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
143 TestFunctional/parallel/Version/short 0.07
144 TestFunctional/parallel/Version/components 1.4
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
150 TestFunctional/parallel/ImageCommands/Setup 1.56
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMultiControlPlane/serial/StartCluster 128.22
168 TestMultiControlPlane/serial/DeployApp 6.18
169 TestMultiControlPlane/serial/PingHostFromPods 1.75
170 TestMultiControlPlane/serial/AddWorkerNode 23.02
171 TestMultiControlPlane/serial/NodeLabels 0.12
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
173 TestMultiControlPlane/serial/CopyFile 19.91
174 TestMultiControlPlane/serial/StopSecondaryNode 12.93
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
176 TestMultiControlPlane/serial/RestartSecondaryNode 18.36
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 135.68
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.62
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
181 TestMultiControlPlane/serial/StopCluster 36.27
182 TestMultiControlPlane/serial/RestartCluster 67.94
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
184 TestMultiControlPlane/serial/AddSecondaryNode 40.12
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
189 TestJSONOutput/start/Command 59.67
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.75
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.66
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.78
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 39.5
215 TestKicCustomNetwork/use_default_bridge_network 36.48
216 TestKicExistingNetwork 34.64
217 TestKicCustomSubnet 34.02
218 TestKicStaticIP 35.76
219 TestMainNoArgs 0.09
220 TestMinikubeProfile 69.55
223 TestMountStart/serial/StartWithMountFirst 8.81
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 7.12
226 TestMountStart/serial/VerifyMountSecond 0.29
227 TestMountStart/serial/DeleteFirst 1.62
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 7.26
231 TestMountStart/serial/VerifyMountPostStop 0.3
234 TestMultiNode/serial/FreshStart2Nodes 77.35
235 TestMultiNode/serial/DeployApp2Nodes 10.33
236 TestMultiNode/serial/PingHostFrom2Pods 1.05
237 TestMultiNode/serial/AddNode 17.04
238 TestMultiNode/serial/MultiNodeLabels 0.08
239 TestMultiNode/serial/ProfileList 0.34
240 TestMultiNode/serial/CopyFile 10.47
241 TestMultiNode/serial/StopNode 2.31
242 TestMultiNode/serial/StartAfterStop 9.67
243 TestMultiNode/serial/RestartKeepsNodes 86.23
244 TestMultiNode/serial/DeleteNode 5.5
245 TestMultiNode/serial/StopMultiNode 24.1
246 TestMultiNode/serial/RestartMultiNode 49.89
247 TestMultiNode/serial/ValidateNameConflict 37.82
252 TestPreload 110.36
254 TestScheduledStopUnix 112.91
257 TestInsufficientStorage 10.64
258 TestRunningBinaryUpgrade 98.35
260 TestKubernetesUpgrade 196.14
261 TestMissingContainerUpgrade 160.31
263 TestPause/serial/Start 68.35
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
266 TestNoKubernetes/serial/StartWithK8s 42.28
267 TestNoKubernetes/serial/StartWithStopK8s 16.03
268 TestNoKubernetes/serial/Start 8.03
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
270 TestNoKubernetes/serial/ProfileList 1.07
271 TestNoKubernetes/serial/Stop 1.3
272 TestPause/serial/SecondStartNoReconfiguration 6.38
273 TestNoKubernetes/serial/StartNoArgs 7.56
274 TestPause/serial/Pause 0.93
275 TestPause/serial/VerifyStatus 0.37
276 TestPause/serial/Unpause 0.83
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
278 TestPause/serial/PauseAgain 1.25
279 TestPause/serial/DeletePaused 2.92
280 TestPause/serial/VerifyDeletedResources 0.19
281 TestStoppedBinaryUpgrade/Setup 0.72
282 TestStoppedBinaryUpgrade/Upgrade 90.91
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
298 TestNetworkPlugins/group/false 6.54
303 TestStartStop/group/old-k8s-version/serial/FirstStart 174.55
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.08
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.83
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
308 TestStartStop/group/old-k8s-version/serial/Stop 12.46
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.51
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.3
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.17
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.65
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
321 TestStartStop/group/embed-certs/serial/FirstStart 63.77
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
325 TestStartStop/group/old-k8s-version/serial/Pause 3.45
327 TestStartStop/group/no-preload/serial/FirstStart 75.32
328 TestStartStop/group/embed-certs/serial/DeployApp 8.48
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.58
330 TestStartStop/group/embed-certs/serial/Stop 12.44
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
332 TestStartStop/group/embed-certs/serial/SecondStart 270.11
333 TestStartStop/group/no-preload/serial/DeployApp 8.35
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
335 TestStartStop/group/no-preload/serial/Stop 12.18
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
337 TestStartStop/group/no-preload/serial/SecondStart 269.43
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
341 TestStartStop/group/embed-certs/serial/Pause 3.17
343 TestStartStop/group/newest-cni/serial/FirstStart 43.97
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.61
346 TestStartStop/group/newest-cni/serial/Stop 1.28
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/newest-cni/serial/SecondStart 16.72
349 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.19
351 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
352 TestStartStop/group/no-preload/serial/Pause 4.69
353 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
356 TestStartStop/group/newest-cni/serial/Pause 4.67
357 TestNetworkPlugins/group/auto/Start 71.55
358 TestNetworkPlugins/group/kindnet/Start 69.95
359 TestNetworkPlugins/group/auto/KubeletFlags 0.34
360 TestNetworkPlugins/group/auto/NetCatPod 8.39
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
363 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
364 TestNetworkPlugins/group/auto/DNS 0.23
365 TestNetworkPlugins/group/auto/Localhost 0.22
366 TestNetworkPlugins/group/auto/HairPin 0.23
367 TestNetworkPlugins/group/kindnet/DNS 0.27
368 TestNetworkPlugins/group/kindnet/Localhost 0.22
369 TestNetworkPlugins/group/kindnet/HairPin 0.47
370 TestNetworkPlugins/group/calico/Start 84.38
371 TestNetworkPlugins/group/custom-flannel/Start 61.43
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
374 TestNetworkPlugins/group/custom-flannel/DNS 0.29
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
377 TestNetworkPlugins/group/calico/ControllerPod 6.02
378 TestNetworkPlugins/group/calico/KubeletFlags 0.43
379 TestNetworkPlugins/group/calico/NetCatPod 11.39
380 TestNetworkPlugins/group/calico/DNS 0.38
381 TestNetworkPlugins/group/calico/Localhost 0.24
382 TestNetworkPlugins/group/calico/HairPin 0.22
383 TestNetworkPlugins/group/enable-default-cni/Start 91.78
384 TestNetworkPlugins/group/flannel/Start 63.11
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
389 TestNetworkPlugins/group/flannel/NetCatPod 10.27
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
393 TestNetworkPlugins/group/flannel/DNS 0.2
394 TestNetworkPlugins/group/flannel/Localhost 0.19
395 TestNetworkPlugins/group/flannel/HairPin 0.16
396 TestNetworkPlugins/group/bridge/Start 83.29
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
398 TestNetworkPlugins/group/bridge/NetCatPod 9.27
399 TestNetworkPlugins/group/bridge/DNS 0.21
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (6.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-386280 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-386280 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.310146501s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.31s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-386280
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-386280: exit status 85 (87.084613ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-386280 | jenkins | v1.32.0 | 15 Mar 24 07:00 UTC |          |
	|         | -p download-only-386280        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:00:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:00:53.452623 3300555 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:00:53.452831 3300555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:00:53.452862 3300555 out.go:304] Setting ErrFile to fd 2...
	I0315 07:00:53.452883 3300555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:00:53.453159 3300555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	W0315 07:00:53.453324 3300555 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18213-3295134/.minikube/config/config.json: open /home/jenkins/minikube-integration/18213-3295134/.minikube/config/config.json: no such file or directory
	I0315 07:00:53.453756 3300555 out.go:298] Setting JSON to true
	I0315 07:00:53.454684 3300555 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":56598,"bootTime":1710429456,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:00:53.454780 3300555 start.go:139] virtualization:  
	I0315 07:00:53.458642 3300555 out.go:97] [download-only-386280] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W0315 07:00:53.458850 3300555 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball: no such file or directory
	I0315 07:00:53.458915 3300555 notify.go:220] Checking for updates...
	I0315 07:00:53.462057 3300555 out.go:169] MINIKUBE_LOCATION=18213
	I0315 07:00:53.464697 3300555 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:00:53.466995 3300555 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:00:53.469399 3300555 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:00:53.471736 3300555 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0315 07:00:53.476197 3300555 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 07:00:53.476457 3300555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:00:53.498018 3300555 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:00:53.498127 3300555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:00:53.567138 3300555 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:00:53.557712806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:00:53.567253 3300555 docker.go:295] overlay module found
	I0315 07:00:53.569855 3300555 out.go:97] Using the docker driver based on user configuration
	I0315 07:00:53.569890 3300555 start.go:297] selected driver: docker
	I0315 07:00:53.569898 3300555 start.go:901] validating driver "docker" against <nil>
	I0315 07:00:53.570004 3300555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:00:53.627626 3300555 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-15 07:00:53.618285743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:00:53.627802 3300555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:00:53.628088 3300555 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0315 07:00:53.628244 3300555 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 07:00:53.630864 3300555 out.go:169] Using Docker driver with root privileges
	I0315 07:00:53.632783 3300555 cni.go:84] Creating CNI manager for ""
	I0315 07:00:53.632814 3300555 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:00:53.632824 3300555 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 07:00:53.632914 3300555 start.go:340] cluster config:
	{Name:download-only-386280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-386280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:00:53.635120 3300555 out.go:97] Starting "download-only-386280" primary control-plane node in "download-only-386280" cluster
	I0315 07:00:53.635144 3300555 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:00:53.637265 3300555 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:00:53.637302 3300555 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0315 07:00:53.637494 3300555 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:00:53.652179 3300555 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0315 07:00:53.652380 3300555 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0315 07:00:53.652477 3300555 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0315 07:00:53.701350 3300555 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0315 07:00:53.701375 3300555 cache.go:56] Caching tarball of preloaded images
	I0315 07:00:53.701886 3300555 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0315 07:00:53.704947 3300555 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0315 07:00:53.704972 3300555 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:00:53.813059 3300555 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0315 07:00:57.492607 3300555 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:00:57.492701 3300555 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-386280 host does not exist
	  To start a cluster, run: "minikube start -p download-only-386280"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
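The block above is the whole preload pipeline for v1.20.0: resolve the remote tarball, download it with the md5 digest carried in the ?checksum= query parameter, then save and verify that digest locally (preload.go:237-255). As a hedged illustration, the helper below sketches what that verify step amounts to in Go; it is not minikube's actual preload code, and the local path in main is hypothetical.

// verify_preload.go - hedged sketch of md5 verification for a downloaded
// preload tarball; checksumHex would come from the "?checksum=md5:..." query.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the result
// to the expected hex digest, as the "verifying checksum" log line implies.
func verifyMD5(path, checksumHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != checksumHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, checksumHex)
	}
	return nil
}

func main() {
	// Hypothetical local path mirroring the cache layout in the log above;
	// the digest is the one the v1.20.0 download URL carried.
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4",
		"7e3d48ccb9f143791669d02e14ce1643")
	fmt.Println(err)
}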

TestDownloadOnly/v1.20.0/DeleteAll (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.30s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.52s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-386280
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.52s)

TestDownloadOnly/v1.28.4/json-events (6.34s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-348072 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-348072 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.34203042s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.34s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-348072
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-348072: exit status 85 (90.831939ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-386280 | jenkins | v1.32.0 | 15 Mar 24 07:00 UTC |                     |
	|         | -p download-only-386280        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Mar 24 07:00 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-386280        | download-only-386280 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | -o=json --download-only        | download-only-348072 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | -p download-only-348072        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:01:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:01:00.678979 3300718 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:01:00.679143 3300718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:00.679149 3300718 out.go:304] Setting ErrFile to fd 2...
	I0315 07:01:00.679154 3300718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:00.679413 3300718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:01:00.679841 3300718 out.go:298] Setting JSON to true
	I0315 07:01:00.680802 3300718 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":56605,"bootTime":1710429456,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:01:00.680882 3300718 start.go:139] virtualization:  
	I0315 07:01:00.683419 3300718 out.go:97] [download-only-348072] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:01:00.685927 3300718 out.go:169] MINIKUBE_LOCATION=18213
	I0315 07:01:00.683898 3300718 notify.go:220] Checking for updates...
	I0315 07:01:00.690138 3300718 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:01:00.692473 3300718 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:01:00.694177 3300718 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:01:00.696488 3300718 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0315 07:01:00.700925 3300718 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 07:01:00.701257 3300718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:01:00.723604 3300718 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:01:00.723714 3300718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:00.798128 3300718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-15 07:01:00.788268958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:00.798252 3300718 docker.go:295] overlay module found
	I0315 07:01:00.800154 3300718 out.go:97] Using the docker driver based on user configuration
	I0315 07:01:00.800181 3300718 start.go:297] selected driver: docker
	I0315 07:01:00.800187 3300718 start.go:901] validating driver "docker" against <nil>
	I0315 07:01:00.800363 3300718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:00.858465 3300718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-15 07:01:00.849373583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:00.858651 3300718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:01:00.858968 3300718 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0315 07:01:00.859190 3300718 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 07:01:00.861390 3300718 out.go:169] Using Docker driver with root privileges
	I0315 07:01:00.863529 3300718 cni.go:84] Creating CNI manager for ""
	I0315 07:01:00.863547 3300718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:01:00.863561 3300718 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 07:01:00.863639 3300718 start.go:340] cluster config:
	{Name:download-only-348072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-348072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:01:00.866010 3300718 out.go:97] Starting "download-only-348072" primary control-plane node in "download-only-348072" cluster
	I0315 07:01:00.866038 3300718 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:01:00.867882 3300718 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:01:00.867908 3300718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:01:00.868087 3300718 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:01:00.882916 3300718 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0315 07:01:00.883054 3300718 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0315 07:01:00.883123 3300718 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0315 07:01:00.883136 3300718 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0315 07:01:00.883152 3300718 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0315 07:01:00.945343 3300718 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0315 07:01:00.945368 3300718 cache.go:56] Caching tarball of preloaded images
	I0315 07:01:00.945945 3300718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0315 07:01:00.948500 3300718 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0315 07:01:00.948537 3300718 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:01:01.031233 3300718 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0315 07:01:05.427201 3300718 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:01:05.427318 3300718 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-348072 host does not exist
	  To start a cluster, run: "minikube start -p download-only-348072"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
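Unlike the v1.20.0 run, this one finds the kic base image already in the local cache directory and skips the pull (image.go:63-105 above). The pattern is check-then-download. Below is a minimal sketch of that pattern, with an invented cache layout and an injected download step rather than minikube's real cache package.

// cache_check.go - hedged sketch of a "download unless already cached" step.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureCached returns the cached file path, invoking download only on a miss.
// The download func is injected so the sketch stays self-contained.
func ensureCached(cacheDir, name string, download func(dst string) error) (string, error) {
	dst := filepath.Join(cacheDir, name)
	if _, err := os.Stat(dst); err == nil {
		// Cache hit: mirrors "exists in cache, skipping pull" in the log.
		return dst, nil
	}
	if err := download(dst); err != nil {
		return "", err
	}
	return dst, nil
}

func main() {
	// First call downloads (writes a stub file); a second call would hit the cache.
	p, err := ensureCached(os.TempDir(), "kicbase.tar",
		func(dst string) error { return os.WriteFile(dst, []byte("image"), 0o644) })
	fmt.Println(p, err)
}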

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-348072
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.29.0-rc.2/json-events (6.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-815376 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-815376 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.382961529s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (6.38s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-815376
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-815376: exit status 85 (82.51713ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-386280 | jenkins | v1.32.0 | 15 Mar 24 07:00 UTC |                     |
	|         | -p download-only-386280           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 07:00 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-386280           | download-only-386280 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | -o=json --download-only           | download-only-348072 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | -p download-only-348072           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| delete  | -p download-only-348072           | download-only-348072 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC | 15 Mar 24 07:01 UTC |
	| start   | -o=json --download-only           | download-only-815376 | jenkins | v1.32.0 | 15 Mar 24 07:01 UTC |                     |
	|         | -p download-only-815376           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 07:01:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 07:01:07.470915 3300882 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:01:07.471110 3300882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:07.471121 3300882 out.go:304] Setting ErrFile to fd 2...
	I0315 07:01:07.471127 3300882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:01:07.471372 3300882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:01:07.471761 3300882 out.go:298] Setting JSON to true
	I0315 07:01:07.472659 3300882 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":56612,"bootTime":1710429456,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:01:07.472742 3300882 start.go:139] virtualization:  
	I0315 07:01:07.475406 3300882 out.go:97] [download-only-815376] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:01:07.477658 3300882 out.go:169] MINIKUBE_LOCATION=18213
	I0315 07:01:07.475597 3300882 notify.go:220] Checking for updates...
	I0315 07:01:07.482343 3300882 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:01:07.484817 3300882 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:01:07.486917 3300882 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:01:07.489238 3300882 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0315 07:01:07.493451 3300882 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 07:01:07.493729 3300882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:01:07.516008 3300882 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:01:07.516120 3300882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:07.583522 3300882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-15 07:01:07.573619992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:07.583636 3300882 docker.go:295] overlay module found
	I0315 07:01:07.586201 3300882 out.go:97] Using the docker driver based on user configuration
	I0315 07:01:07.586232 3300882 start.go:297] selected driver: docker
	I0315 07:01:07.586239 3300882 start.go:901] validating driver "docker" against <nil>
	I0315 07:01:07.586340 3300882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:01:07.637472 3300882 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-15 07:01:07.628694563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:01:07.637636 3300882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 07:01:07.637917 3300882 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0315 07:01:07.638079 3300882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 07:01:07.640364 3300882 out.go:169] Using Docker driver with root privileges
	I0315 07:01:07.642472 3300882 cni.go:84] Creating CNI manager for ""
	I0315 07:01:07.642499 3300882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0315 07:01:07.642509 3300882 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 07:01:07.642586 3300882 start.go:340] cluster config:
	{Name:download-only-815376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-815376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:01:07.644827 3300882 out.go:97] Starting "download-only-815376" primary control-plane node in "download-only-815376" cluster
	I0315 07:01:07.644848 3300882 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0315 07:01:07.646838 3300882 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0315 07:01:07.646860 3300882 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0315 07:01:07.646931 3300882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0315 07:01:07.664245 3300882 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0315 07:01:07.664384 3300882 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0315 07:01:07.664408 3300882 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0315 07:01:07.664413 3300882 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0315 07:01:07.664421 3300882 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0315 07:01:07.731459 3300882 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0315 07:01:07.731498 3300882 cache.go:56] Caching tarball of preloaded images
	I0315 07:01:07.731693 3300882 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0315 07:01:07.734062 3300882 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0315 07:01:07.734102 3300882 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:01:07.816066 3300882 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0315 07:01:12.300637 3300882 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0315 07:01:12.300742 3300882 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18213-3295134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-815376 host does not exist
	  To start a cluster, run: "minikube start -p download-only-815376"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
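Each LogsDuration subtest deliberately runs "minikube logs" against a download-only profile, where no host exists, and asserts on the resulting exit status 85 instead of treating it as a hard failure. A hedged sketch of extracting an exit code from a subprocess in Go, which is all the (dbg) Non-zero exit lines boil down to:

// exit_code.go - hedged sketch: run a command and extract its exit status,
// the way the harness distinguishes "exit status 85" from a command that
// failed to run at all.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func runAndGetExitCode(name string, args ...string) (int, []byte, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), out, nil // ran, but exited non-zero
	}
	return 0, out, err // err is nil on success, non-nil if it never started
}

func main() {
	code, out, err := runAndGetExitCode("out/minikube-linux-arm64", "logs", "-p", "download-only-815376")
	fmt.Println(code, err, len(out)) // the harness would assert code == 85
}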

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-815376
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-391586 --alsologtostderr --binary-mirror http://127.0.0.1:34147 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-391586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-391586
--- PASS: TestBinaryMirror (0.57s)
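TestBinaryMirror points --binary-mirror at a throwaway HTTP endpoint on 127.0.0.1 (here :34147) so minikube fetches its Kubernetes binaries from the mirror instead of the default download site. A minimal sketch of such a mirror follows; the ./mirror directory layout is an assumption for illustration, not the test's actual server:

// mirror.go - hedged sketch of a local binary mirror for --binary-mirror.
package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	// Listen on an ephemeral localhost port, like the :34147 seen in the log.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("serving mirror on http://%s", ln.Addr())
	// Assumed layout: ./mirror/<version>/bin/linux/arm64/kubectl, and so on.
	log.Fatal(http.Serve(ln, http.FileServer(http.Dir("./mirror"))))
}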

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-639618
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-639618: exit status 85 (98.969538ms)

-- stdout --
	* Profile "addons-639618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-639618"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-639618
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-639618: exit status 85 (88.582129ms)

-- stdout --
	* Profile "addons-639618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-639618"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (118.88s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-639618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-639618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m58.874348698s)
--- PASS: TestAddons/Setup (118.88s)

TestAddons/parallel/Registry (14.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 44.009402ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-j6pq4" [20915188-e06b-4cee-8a92-daac71a39bdc] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005750745s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q6qc2" [d1ac4493-7e3b-45fb-be8d-76cde45f44bb] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004606059s
addons_test.go:340: (dbg) Run:  kubectl --context addons-639618 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-639618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-639618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.301113872s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 ip
2024/03/15 07:03:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.48s)
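The registry probe boils down to "wget --spider -S http://registry.kube-system.svc.cluster.local" from a busybox pod, that is, a headers-only reachability check against the in-cluster service. An equivalent check in Go, hedged as a sketch; it assumes the process runs somewhere that can resolve the cluster-internal DNS name:

// probe.go - hedged sketch of the registry reachability check.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// HEAD is the closest analogue of wget --spider: fetch headers only.
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}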

TestAddons/parallel/InspektorGadget (10.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bz2fk" [5936c41d-b23d-48fd-b977-030c3617bc46] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005063514s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-639618
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-639618: (5.899112531s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.465906ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-wzppg" [abbd5285-7d78-4282-a24a-889b0049d7bf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005398118s
addons_test.go:415: (dbg) Run:  kubectl --context addons-639618 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)
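
Besides kubectl top, the aggregated metrics API that this addon registers can be checked directly. A small sketch; the APIService name v1beta1.metrics.k8s.io is the upstream metrics-server default, not taken from this log:

    kubectl --context addons-639618 get apiservice v1beta1.metrics.k8s.io   # should report Available=True
    kubectl --context addons-639618 top nodes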

                                                
                                    
TestAddons/parallel/CSI (74.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 44.232371ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-639618 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc hpvc -o jsonpath={.status.phase} -n default  [identical poll repeated 27 more times]
addons_test.go:574: (dbg) Run:  kubectl --context addons-639618 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [946671a9-845b-42f0-ae0b-132c0f89380e] Pending
helpers_test.go:344: "task-pv-pod" [946671a9-845b-42f0-ae0b-132c0f89380e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [946671a9-845b-42f0-ae0b-132c0f89380e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003853696s
addons_test.go:584: (dbg) Run:  kubectl --context addons-639618 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-639618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-639618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-639618 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-639618 delete pod task-pv-pod: (1.656816784s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-639618 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-639618 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default  [identical poll repeated 17 more times]
addons_test.go:616: (dbg) Run:  kubectl --context addons-639618 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [165039ac-f0d0-4709-94ce-5caa84131a62] Pending
helpers_test.go:344: "task-pv-pod-restore" [165039ac-f0d0-4709-94ce-5caa84131a62] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [165039ac-f0d0-4709-94ce-5caa84131a62] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003646513s
addons_test.go:626: (dbg) Run:  kubectl --context addons-639618 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-639618 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-639618 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-639618 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.954059952s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (74.81s)
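
The snapshot-and-restore cycle above can be spot-checked with the same queries the helpers poll. A minimal sketch, assuming the object names from this run's testdata manifests:

    kubectl --context addons-639618 get pvc hpvc -o jsonpath='{.status.phase}'
    kubectl --context addons-639618 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl --context addons-639618 get pvc hpvc-restore -o jsonpath='{.status.phase}'   # Bound once the restore succeeds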

                                                
                                    
TestAddons/parallel/Headlamp (12.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-639618 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-639618 --alsologtostderr -v=1: (1.526050625s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-ghdl4" [2d4c188a-41ca-401e-8263-13be6bde5aaa] Pending
helpers_test.go:344: "headlamp-5485c556b-ghdl4" [2d4c188a-41ca-401e-8263-13be6bde5aaa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-ghdl4" [2d4c188a-41ca-401e-8263-13be6bde5aaa] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003749509s
--- PASS: TestAddons/parallel/Headlamp (12.53s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-l95pc" [efcfc10a-f0d6-45b4-90f3-5ac6ca1a76c8] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003631956s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-639618
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (51.32s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-639618 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-639618 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-639618 get pvc test-pvc -o jsonpath={.status.phase} -n default  [identical poll repeated 4 more times]
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f481a7c7-9cc1-470b-be6f-0071ee5bb7ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f481a7c7-9cc1-470b-be6f-0071ee5bb7ed] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f481a7c7-9cc1-470b-be6f-0071ee5bb7ed] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003892356s
addons_test.go:891: (dbg) Run:  kubectl --context addons-639618 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 ssh "cat /opt/local-path-provisioner/pvc-3dd5b1bc-194d-4b53-a0d1-0db7667fbc49_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-639618 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-639618 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-639618 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-639618 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.15998104s)
--- PASS: TestAddons/parallel/LocalPath (51.32s)
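
The provisioned volume lives on the node's filesystem, so it can be inspected over SSH. A sketch; the local-path storage class name is the provisioner's upstream default rather than something this log prints:

    kubectl --context addons-639618 get storageclass local-path
    out/minikube-linux-arm64 -p addons-639618 ssh "ls /opt/local-path-provisioner"   # per-PVC directories, as read by the cat above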

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kmtmz" [c36f2e50-18db-430e-802c-18aea031ca4a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004480282s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-639618
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-vpjlb" [b9dcc2b2-17cc-4a32-b981-1cd5757918e6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004547848s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-639618 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-639618 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-639618
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-639618: (11.956814626s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-639618
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-639618
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-639618
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

                                                
                                    
TestCertOptions (37.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-342304 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-342304 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.83883307s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-342304 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-342304 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-342304 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-342304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-342304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-342304: (2.018262322s)
--- PASS: TestCertOptions (37.53s)
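
To confirm that the extra --apiserver-ips/--apiserver-names landed in the serving certificate, the SAN block of the cert read above can be extracted directly. A minimal sketch against the same profile:

    out/minikube-linux-arm64 -p cert-options-342304 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'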

                                                
                                    
TestCertExpiration (230.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-764294 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0315 07:41:17.879177 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-764294 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.826959614s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-764294 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-764294 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.467778708s)
helpers_test.go:175: Cleaning up "cert-expiration-764294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-764294
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-764294: (2.567423031s)
--- PASS: TestCertExpiration (230.86s)
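
The two start runs differ only in --cert-expiration (3m, then 8760h). The resulting validity window can be read from the certificate itself; a sketch, assuming the same cert path as in TestCertOptions:

    out/minikube-linux-arm64 -p cert-expiration-764294 ssh \
      "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"   # prints notBefore/notAfter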

                                                
                                    
TestForceSystemdFlag (46.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-509862 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-509862 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.630627473s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-509862 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-509862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-509862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-509862: (2.102998247s)
--- PASS: TestForceSystemdFlag (46.11s)
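
What the cat above is checked for is the containerd cgroup driver. A minimal sketch of the manual check; SystemdCgroup is containerd's standard runc option name, not a string quoted from this log:

    out/minikube-linux-arm64 -p force-systemd-flag-509862 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true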

                                                
                                    
TestForceSystemdEnv (42.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-770281 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-770281 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.532700124s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-770281 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-770281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-770281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-770281: (2.347018444s)
--- PASS: TestForceSystemdEnv (42.23s)

                                                
                                    
TestDockerEnvContainerd (48.31s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-704907 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-704907 --driver=docker  --container-runtime=containerd: (31.971020279s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-704907"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-704907": (1.217666737s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dslAnRLIZCTG/agent.3317661" SSH_AGENT_PID="3317662" DOCKER_HOST=ssh://docker@127.0.0.1:36685 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dslAnRLIZCTG/agent.3317661" SSH_AGENT_PID="3317662" DOCKER_HOST=ssh://docker@127.0.0.1:36685 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dslAnRLIZCTG/agent.3317661" SSH_AGENT_PID="3317662" DOCKER_HOST=ssh://docker@127.0.0.1:36685 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.681231917s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dslAnRLIZCTG/agent.3317661" SSH_AGENT_PID="3317662" DOCKER_HOST=ssh://docker@127.0.0.1:36685 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-704907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-704907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-704907: (2.044667001s)
--- PASS: TestDockerEnvContainerd (48.31s)
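
The same SSH-tunnelled Docker context the test wires up can be used interactively. A sketch, reusing the dockerenv-704907 profile from this run:

    eval "$(out/minikube-linux-arm64 -p dockerenv-704907 docker-env --ssh-host --ssh-add)"
    docker version    # now talks to the engine inside the minikube node over SSH
    docker image ls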

                                                
                                    
TestErrorSpam/setup (28.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-809873 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809873 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-809873 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809873 --driver=docker  --container-runtime=containerd: (28.800425033s)
--- PASS: TestErrorSpam/setup (28.80s)

                                                
                                    
TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 status
--- PASS: TestErrorSpam/status (1.04s)

                                                
                                    
TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 stop: (1.229508325s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-809873 --log_dir /tmp/nospam-809873 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18213-3295134/.minikube/files/etc/test/nested/copy/3300550/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (60.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0315 07:08:14.830195 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:14.836236 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:14.846492 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:14.866785 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:14.907103 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:14.987433 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:15.147946 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:15.468250 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:16.108850 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:17.389067 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:19.949225 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:08:25.070046 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-757678 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m0.03974581s)
--- PASS: TestFunctional/serial/StartWithProxy (60.04s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-757678 --alsologtostderr -v=8: (6.011762074s)
functional_test.go:659: soft start took 6.015351701s for "functional-757678" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.02s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-757678 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:3.1: (1.449565097s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:3.3: (1.323379002s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:latest
E0315 07:08:35.310664 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:latest: (1.253752235s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)
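
The cache flow exercised here is add, then verify inside the node. A minimal sketch against this profile:

    out/minikube-linux-arm64 -p functional-757678 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl images   # the cached image should be listed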

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-757678 /tmp/TestFunctionalserialCacheCmdcacheadd_local3149678996/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache add minikube-local-cache-test:functional-757678
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache delete minikube-local-cache-test:functional-757678
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-757678
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (313.735108ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 cache reload: (1.153617529s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
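
The reload cycle above is: remove the image on the node, confirm it is gone, repopulate from the local cache, confirm it is back. The same four steps by hand:

    out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
    out/minikube-linux-arm64 -p functional-757678 cache reload
    out/minikube-linux-arm64 -p functional-757678 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again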

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 kubectl -- --context functional-757678 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-757678 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0315 07:08:55.791291 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-757678 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.860239295s)
functional_test.go:757: restart took 45.86034774s for "functional-757678" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.86s)
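
--extra-config takes component.key=value pairs that minikube forwards to the named control-plane component. A sketch of the general form used above; the verification step is an assumption about where the flag surfaces, not something this log shows:

    out/minikube-linux-arm64 start -p functional-757678 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-757678 -n kube-system describe pod -l component=kube-apiserver | grep admission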

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-757678 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 logs: (1.656126855s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 logs --file /tmp/TestFunctionalserialLogsFileCmd2875331138/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 logs --file /tmp/TestFunctionalserialLogsFileCmd2875331138/001/logs.txt: (1.65617551s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                    
TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-757678 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-757678
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-757678: exit status 115 (601.886314ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32746 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-757678 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 config get cpus: exit status 14 (90.973033ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 config get cpus: exit status 14 (94.365857ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
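
Note the two Non-zero exit entries: config get returns exit status 14 whenever the key is unset, so the unset/get pairs above are expected to fail. The full cycle by hand:

    out/minikube-linux-arm64 -p functional-757678 config set cpus 2
    out/minikube-linux-arm64 -p functional-757678 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-757678 config unset cpus
    out/minikube-linux-arm64 -p functional-757678 config get cpus     # exit status 14: key not found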

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-757678 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-757678 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3331517: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.78s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.358953ms)

-- stdout --
	* [functional-757678] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0315 07:10:05.102549 3331242 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:10:05.102823 3331242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:05.102857 3331242 out.go:304] Setting ErrFile to fd 2...
	I0315 07:10:05.102879 3331242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:05.103205 3331242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:10:05.103641 3331242 out.go:298] Setting JSON to false
	I0315 07:10:05.104816 3331242 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":57149,"bootTime":1710429456,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:10:05.104931 3331242 start.go:139] virtualization:  
	I0315 07:10:05.107669 3331242 out.go:177] * [functional-757678] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:10:05.109998 3331242 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:10:05.110098 3331242 notify.go:220] Checking for updates...
	I0315 07:10:05.112230 3331242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:10:05.114721 3331242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:10:05.116565 3331242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:10:05.119300 3331242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:10:05.122252 3331242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:10:05.125796 3331242 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:10:05.126393 3331242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:10:05.159533 3331242 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:10:05.159642 3331242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:10:05.223671 3331242 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-15 07:10:05.213936274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:10:05.223795 3331242 docker.go:295] overlay module found
	I0315 07:10:05.226349 3331242 out.go:177] * Using the docker driver based on existing profile
	I0315 07:10:05.228968 3331242 start.go:297] selected driver: docker
	I0315 07:10:05.228991 3331242 start.go:901] validating driver "docker" against &{Name:functional-757678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-757678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:10:05.229110 3331242 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:10:05.231658 3331242 out.go:177] 
	W0315 07:10:05.233767 3331242 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0315 07:10:05.235798 3331242 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
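
The two runs above bracket minikube's dry-run memory validation: 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY against the 1800MB usable minimum, while the second invocation (functional_test.go:987) omits --memory and validates cleanly. A minimal sketch of the passing variant with an explicit request, assuming any value at or above the logged minimum is accepted (4000MB mirrors the Memory:4000 field in the profile config above):

  # hedged sketch: same dry-run, but with a memory request above the 1800MB minimum
  $ out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 4000MB --alsologtostderr --driver=docker --container-runtime=containerd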

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (236.105757ms)
-- stdout --
	* [functional-757678] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0315 07:10:04.896401 3331140 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:10:04.896601 3331140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:04.896607 3331140 out.go:304] Setting ErrFile to fd 2...
	I0315 07:10:04.896611 3331140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:10:04.897385 3331140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:10:04.897776 3331140 out.go:298] Setting JSON to false
	I0315 07:10:04.898771 3331140 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":57149,"bootTime":1710429456,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:10:04.898848 3331140 start.go:139] virtualization:  
	I0315 07:10:04.902663 3331140 out.go:177] * [functional-757678] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0315 07:10:04.905350 3331140 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:10:04.907830 3331140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:10:04.905554 3331140 notify.go:220] Checking for updates...
	I0315 07:10:04.913069 3331140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:10:04.915451 3331140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:10:04.917423 3331140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:10:04.919716 3331140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:10:04.922548 3331140 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:10:04.923034 3331140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:10:04.956553 3331140 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:10:04.956667 3331140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:10:05.031022 3331140 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-15 07:10:05.019938262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:10:05.031170 3331140 docker.go:295] overlay module found
	I0315 07:10:05.033801 3331140 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0315 07:10:05.035654 3331140 start.go:297] selected driver: docker
	I0315 07:10:05.035679 3331140 start.go:901] validating driver "docker" against &{Name:functional-757678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-757678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 07:10:05.035892 3331140 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:10:05.038410 3331140 out.go:177] 
	W0315 07:10:05.040389 3331140 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0315 07:10:05.042381 3331140 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
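
For reference, the French output above mirrors the English DryRun messages: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line reports the same RSRC_INSUFFICIENT_REQ_MEMORY error (250MiB requested, 1800MB usable minimum). A sketch of reproducing it by hand, assuming minikube picks the locale up from the standard LC_ALL/LANG environment variables (how the test itself selects the locale is not shown in this log):

  # hedged sketch: force the French locale for a single invocation
  $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-757678 --dry-run --memory 250MB --driver=docker --container-runtime=containerd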

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
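
The second invocation above exercises status -f, which renders a Go template against the status object, so "host:", "kublet:" and the other prefixes are literal label text (the "kublet" spelling is a label in the test command, not a field name), while {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the struct fields. A sketch of a single-field query built the same way:

  # hedged sketch: print just the kubelet state via the same template mechanism
  $ out/minikube-linux-arm64 -p functional-757678 status -f '{{.Kubelet}}'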

TestFunctional/parallel/ServiceCmdConnect (9.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-757678 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-757678 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bw7j7" [03c3ee56-ae10-4df4-99ce-08941ed160f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bw7j7" [03c3ee56-ae10-4df4-99ce-08941ed160f6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00377316s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32304
functional_test.go:1671: http://192.168.49.2:32304: success! body:
Hostname: hello-node-connect-7799dfb7c6-bw7j7
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32304
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.74s)
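
The body above is the stock echoserver reply, echoing back the request that reached the pod. While the NodePort service exists, the endpoint logged at functional_test.go:1651 can be probed directly; a sketch using the URL from this run:

  # hedged sketch: hit the logged NodePort endpoint by hand
  $ curl -s http://192.168.49.2:32304/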

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e7d13eb3-d6a8-40c9-bfdf-0a309780adb1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004557497s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-757678 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-757678 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-757678 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-757678 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e03a2ab6-cc3a-4610-a364-0e630f586499] Pending
helpers_test.go:344: "sp-pod" [e03a2ab6-cc3a-4610-a364-0e630f586499] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e03a2ab6-cc3a-4610-a364-0e630f586499] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004334953s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-757678 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-757678 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-757678 delete -f testdata/storage-provisioner/pod.yaml: (1.56037201s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-757678 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6689c020-f1f6-48c5-8a97-98a66412e962] Pending
helpers_test.go:344: "sp-pod" [6689c020-f1f6-48c5-8a97-98a66412e962] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003930119s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-757678 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.81s)
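
The sequence above is a persistence check: write /tmp/mount/foo from the first sp-pod, delete the pod, recreate it from the same manifest, and confirm the file is still there, i.e. the data lives on the claim rather than in the pod. A sketch of an intermediate check one could add, assuming the claim name myclaim from the get pvc step above:

  # hedged sketch: confirm the claim is bound before scheduling the pod
  $ kubectl --context functional-757678 get pvc myclaim -o jsonpath='{.status.phase}'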

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh -n functional-757678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cp functional-757678:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd722824706/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh -n functional-757678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh -n functional-757678 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)
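
The three cp invocations cover host-to-guest, guest-to-host (note the <profile>:<path> source form in the second command), and host-to-guest into a directory that does not yet exist. A sketch of the node-qualified destination form, with a hypothetical target filename:

  # hedged sketch: <profile>:<path> also works as the destination
  $ out/minikube-linux-arm64 -p functional-757678 cp testdata/cp-test.txt functional-757678:/home/docker/cp-test2.txt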

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3300550/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /etc/test/nested/copy/3300550/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3300550.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /etc/ssl/certs/3300550.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3300550.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /usr/share/ca-certificates/3300550.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/33005502.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /etc/ssl/certs/33005502.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/33005502.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /usr/share/ca-certificates/33005502.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
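
The .0 files above are OpenSSL subject-hash links: 51391683.0 and 3ec20f2e.0 are the hashed names under which the synced 3300550.pem and 33005502.pem certificates are looked up in the system trust directory. A sketch of how such a hash is computed (standard openssl usage; the path is illustrative):

  # hedged sketch: derive the <hash>.0 name for a synced certificate
  $ openssl x509 -noout -subject_hash -in /etc/ssl/certs/3300550.pem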

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-757678 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh "sudo systemctl is-active docker": exit status 1 (298.479087ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh "sudo systemctl is-active crio": exit status 1 (323.105509ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
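
Both non-zero exits above are the expected outcome: with containerd as the active runtime, docker and crio must be inactive, and systemctl is-active exits non-zero for any state other than active (the remote "status 3" in stderr is systemd's code for inactive, which minikube ssh surfaces as exit 1). The semantics on any systemd host:

  # hedged sketch: is-active prints the state and encodes it in the exit code
  $ systemctl is-active docker; echo $?
  inactive
  3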

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3329122: os: process already finished
helpers_test.go:502: unable to terminate pid 3328994: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-757678 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [42bef16a-d2fc-42bf-8ba7-7718b2d91830] Pending
helpers_test.go:344: "nginx-svc" [42bef16a-d2fc-42bf-8ba7-7718b2d91830] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0315 07:09:36.752153 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [42bef16a-d2fc-42bf-8ba7-7718b2d91830] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004115073s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-757678 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.124.15 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-757678 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-757678 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-757678 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-zkbv7" [96a43c91-7dd3-493d-9977-d97bb724bde1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-zkbv7" [96a43c91-7dd3-493d-9977-d97bb724bde1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005355038s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "410.593881ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "113.95109ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service list -o json
functional_test.go:1490: Took "631.671203ms" to run "out/minikube-linux-arm64 -p functional-757678 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "427.993827ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "78.193385ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
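
The timings above show what --light buys: roughly 428ms for the full JSON listing versus 78ms when the per-profile status validation is skipped. A sketch of consuming the fast listing, assuming jq is available and that the JSON groups profiles under a "valid" array whose entries carry a Name field (the output shape is not shown in this log):

  # hedged sketch: pull just the profile names from the --light listing
  $ out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'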

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30261
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/MountCmd/any-port (7.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdany-port917165193/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710486602001848274" to /tmp/TestFunctionalparallelMountCmdany-port917165193/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710486602001848274" to /tmp/TestFunctionalparallelMountCmdany-port917165193/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710486602001848274" to /tmp/TestFunctionalparallelMountCmdany-port917165193/001/test-1710486602001848274
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (493.800646ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 15 07:10 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 15 07:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 15 07:10 test-1710486602001848274
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh cat /mount-9p/test-1710486602001848274
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-757678 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [241eb449-660c-41d7-98a6-104e08e2fbc9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [241eb449-660c-41d7-98a6-104e08e2fbc9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [241eb449-660c-41d7-98a6-104e08e2fbc9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005212022s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-757678 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdany-port917165193/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)
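
The first findmnt probe fails because the 9p mount comes up asynchronously after the mount daemon starts; the test simply retries and the second probe succeeds. A sketch of the same wait, using the profile and mount point from this run:

  # hedged sketch: poll until the 9p mount appears, as the test's retry does
  $ until out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done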

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30261
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdspecific-port3696211431/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (571.492644ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdspecific-port3696211431/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh "sudo umount -f /mount-9p": exit status 1 (334.907769ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-757678 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdspecific-port3696211431/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.29s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T" /mount1: (1.085205753s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-757678 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-757678 /tmp/TestFunctionalparallelMountCmdVerifyCleanup18043704/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 version -o=json --components: (1.395924999s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-757678 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-757678
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-757678 image ls --format short --alsologtostderr:
I0315 07:10:32.226549 3333973 out.go:291] Setting OutFile to fd 1 ...
I0315 07:10:32.226727 3333973 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.226733 3333973 out.go:304] Setting ErrFile to fd 2...
I0315 07:10:32.226739 3333973 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.226973 3333973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
I0315 07:10:32.227721 3333973 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.227839 3333973 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.228387 3333973 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
I0315 07:10:32.250083 3333973 ssh_runner.go:195] Run: systemctl --version
I0315 07:10:32.250143 3333973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
I0315 07:10:32.276549 3333973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
I0315 07:10:32.373338 3333973 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
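
Note: the four ImageList tests in this group drive the same subcommand with different --format values; against any running profile (the profile name below is the one from this run) the variants are:

    # the same listing in each supported output format
    out/minikube-linux-arm64 -p functional-757678 image ls --format short
    out/minikube-linux-arm64 -p functional-757678 image ls --format table
    out/minikube-linux-arm64 -p functional-757678 image ls --format json
    out/minikube-linux-arm64 -p functional-757678 image ls --format yaml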

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-757678 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/library/minikube-local-cache-test | functional-757678  | sha256:b68602 | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-757678 image ls --format table --alsologtostderr:
I0315 07:10:32.542489 3334030 out.go:291] Setting OutFile to fd 1 ...
I0315 07:10:32.542704 3334030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.542741 3334030 out.go:304] Setting ErrFile to fd 2...
I0315 07:10:32.542762 3334030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.543124 3334030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
I0315 07:10:32.543944 3334030 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.544169 3334030 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.544723 3334030 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
I0315 07:10:32.561678 3334030 ssh_runner.go:195] Run: systemctl --version
I0315 07:10:32.561741 3334030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
I0315 07:10:32.585056 3334030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
I0315 07:10:32.684772 3334030 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-757678 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11
e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTag
s":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba635
91858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisione
r@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:b686025a1606131449f2f21ea42bc2dd26b6c962d7dd7e1c525944e9ad30d0ba","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-757678"],"size":"1006"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-757678 image ls --format json --alsologtostderr:
I0315 07:10:32.518133 3334026 out.go:291] Setting OutFile to fd 1 ...
I0315 07:10:32.518326 3334026 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.518352 3334026 out.go:304] Setting ErrFile to fd 2...
I0315 07:10:32.518370 3334026 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.518643 3334026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
I0315 07:10:32.519361 3334026 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.519531 3334026 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.520072 3334026 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
I0315 07:10:32.540064 3334026 ssh_runner.go:195] Run: systemctl --version
I0315 07:10:32.540154 3334026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
I0315 07:10:32.562998 3334026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
I0315 07:10:32.667571 3334026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
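
Note: the JSON form above is a flat array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into jq. A minimal sketch, assuming jq is installed on the host (it is not part of this test):

    # print each image's first tag (or bare id when untagged) and size, smallest first
    out/minikube-linux-arm64 -p functional-757678 image ls --format json \
      | jq -r 'sort_by(.size | tonumber)[] | "\(.repoTags[0] // .id)\t\(.size)"'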

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-757678 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:b686025a1606131449f2f21ea42bc2dd26b6c962d7dd7e1c525944e9ad30d0ba
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-757678
size: "1006"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-757678 image ls --format yaml --alsologtostderr:
I0315 07:10:32.234127 3333974 out.go:291] Setting OutFile to fd 1 ...
I0315 07:10:32.234340 3333974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.234367 3333974 out.go:304] Setting ErrFile to fd 2...
I0315 07:10:32.234387 3333974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:32.234677 3333974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
I0315 07:10:32.235489 3333974 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.235730 3333974 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:32.236269 3333974 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
I0315 07:10:32.254040 3333974 ssh_runner.go:195] Run: systemctl --version
I0315 07:10:32.254102 3333974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
I0315 07:10:32.279800 3333974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
I0315 07:10:32.376854 3333974 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-757678 ssh pgrep buildkitd: exit status 1 (292.86148ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image build -t localhost/my-image:functional-757678 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-757678 image build -t localhost/my-image:functional-757678 testdata/build --alsologtostderr: (2.123958863s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-757678 image build -t localhost/my-image:functional-757678 testdata/build --alsologtostderr:
I0315 07:10:33.105397 3334131 out.go:291] Setting OutFile to fd 1 ...
I0315 07:10:33.106660 3334131 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:33.106680 3334131 out.go:304] Setting ErrFile to fd 2...
I0315 07:10:33.106688 3334131 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 07:10:33.107033 3334131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
I0315 07:10:33.107772 3334131 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:33.109803 3334131 config.go:182] Loaded profile config "functional-757678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0315 07:10:33.110359 3334131 cli_runner.go:164] Run: docker container inspect functional-757678 --format={{.State.Status}}
I0315 07:10:33.128312 3334131 ssh_runner.go:195] Run: systemctl --version
I0315 07:10:33.128377 3334131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-757678
I0315 07:10:33.145457 3334131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36695 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/functional-757678/id_rsa Username:docker}
I0315 07:10:33.239478 3334131 build_images.go:161] Building image from path: /tmp/build.1793185462.tar
I0315 07:10:33.239552 3334131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0315 07:10:33.248365 3334131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1793185462.tar
I0315 07:10:33.251564 3334131 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1793185462.tar: stat -c "%s %y" /var/lib/minikube/build/build.1793185462.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1793185462.tar': No such file or directory
I0315 07:10:33.251591 3334131 ssh_runner.go:362] scp /tmp/build.1793185462.tar --> /var/lib/minikube/build/build.1793185462.tar (3072 bytes)
I0315 07:10:33.285446 3334131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1793185462
I0315 07:10:33.295749 3334131 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1793185462 -xf /var/lib/minikube/build/build.1793185462.tar
I0315 07:10:33.306036 3334131 containerd.go:379] Building image: /var/lib/minikube/build/build.1793185462
I0315 07:10:33.306108 3334131 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1793185462 --local dockerfile=/var/lib/minikube/build/build.1793185462 --output type=image,name=localhost/my-image:functional-757678
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:561ed8af39a4e7b4e51a7233db2f0337150d12b47686e245e44dd8ae5a9c2bbc 0.0s done
#8 exporting config sha256:9166fa5ddc68e959d66060bf09028971f1562a91a567c5f14a7b7af16abfcf4e 0.0s done
#8 naming to localhost/my-image:functional-757678 done
#8 DONE 0.2s
I0315 07:10:35.120977 3334131 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1793185462 --local dockerfile=/var/lib/minikube/build/build.1793185462 --output type=image,name=localhost/my-image:functional-757678: (1.814838194s)
I0315 07:10:35.121112 3334131 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1793185462
I0315 07:10:35.130668 3334131 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1793185462.tar
I0315 07:10:35.140591 3334131 build_images.go:217] Built localhost/my-image:functional-757678 from /tmp/build.1793185462.tar
I0315 07:10:35.140620 3334131 build_images.go:133] succeeded building to: functional-757678
I0315 07:10:35.140625 3334131 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)
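
Note: the build context here is minikube's testdata/build. Judging from buildkit steps #1-#8 above, an equivalent standalone reproduction looks like the sketch below; the directory, Dockerfile body, and content.txt contents are reconstructed from the logged steps, not copied from testdata:

    # recreate a context matching steps #5-#7 (FROM busybox, RUN true, ADD content.txt)
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt
    out/minikube-linux-arm64 -p functional-757678 image build -t localhost/my-image:functional-757678 .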

TestFunctional/parallel/ImageCommands/Setup (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.539609024s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-757678
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
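
Note: all three UpdateContextCmd variants run the same command; update-context rewrites the profile's kubeconfig entry to point at the cluster's current endpoint. A quick manual verification, assuming kubectl is on PATH and the context is named after the profile (minikube's default):

    out/minikube-linux-arm64 -p functional-757678 update-context
    kubectl config view --minify --context functional-757678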

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image rm gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-757678
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-757678 image save --daemon gcr.io/google-containers/addon-resizer:functional-757678 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-757678
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-757678
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-757678
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-757678
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (128.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182730 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0315 07:10:58.672361 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-182730 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m7.294902771s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.22s)
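
Note: --ha provisions multiple control-plane nodes behind one API endpoint (https://192.168.49.254:8443 in the status logs below). The resulting topology can be inspected with the same commands this suite uses:

    out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
    out/minikube-linux-arm64 kubectl -p ha-182730 -- get nodes -o wide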

TestMultiControlPlane/serial/DeployApp (6.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-182730 -- rollout status deployment/busybox: (2.943997647s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-dh4kg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-wfpws -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-xsxgq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-dh4kg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-wfpws -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-xsxgq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-dh4kg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-wfpws -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-xsxgq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.18s)
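
Note: testdata/ha/ha-pod-dns-test.yaml is the manifest under test; the rollout and pod names above show it creates a 3-replica busybox Deployment. A rough file-less equivalent (the image and sleep command are assumptions, not the exact testdata):

    # illustrative only: the real manifest lives in minikube's testdata
    out/minikube-linux-arm64 kubectl -p ha-182730 -- create deployment busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --replicas=3 -- sleep 3600
    out/minikube-linux-arm64 kubectl -p ha-182730 -- rollout status deployment/busybox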

TestMultiControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-dh4kg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-dh4kg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-wfpws -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-wfpws -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-xsxgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182730 -- exec busybox-5b5d89c9d6-xsxgq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (23.02s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-182730 -v=7 --alsologtostderr
E0315 07:13:14.829168 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-182730 -v=7 --alsologtostderr: (22.008814119s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr: (1.008492018s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.02s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-182730 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (19.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp testdata/cp-test.txt ha-182730:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile80590296/001/cp-test_ha-182730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730:/home/docker/cp-test.txt ha-182730-m02:/home/docker/cp-test_ha-182730_ha-182730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test_ha-182730_ha-182730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730:/home/docker/cp-test.txt ha-182730-m03:/home/docker/cp-test_ha-182730_ha-182730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test_ha-182730_ha-182730-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730:/home/docker/cp-test.txt ha-182730-m04:/home/docker/cp-test_ha-182730_ha-182730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test_ha-182730_ha-182730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp testdata/cp-test.txt ha-182730-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile80590296/001/cp-test_ha-182730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt ha-182730:/home/docker/cp-test_ha-182730-m02_ha-182730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test_ha-182730-m02_ha-182730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt ha-182730-m03:/home/docker/cp-test_ha-182730-m02_ha-182730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test_ha-182730-m02_ha-182730-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt ha-182730-m04:/home/docker/cp-test_ha-182730-m02_ha-182730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test_ha-182730-m02_ha-182730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp testdata/cp-test.txt ha-182730-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile80590296/001/cp-test_ha-182730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m03:/home/docker/cp-test.txt ha-182730:/home/docker/cp-test_ha-182730-m03_ha-182730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test_ha-182730-m03_ha-182730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m03:/home/docker/cp-test.txt ha-182730-m02:/home/docker/cp-test_ha-182730-m03_ha-182730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test_ha-182730-m03_ha-182730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m03:/home/docker/cp-test.txt ha-182730-m04:/home/docker/cp-test_ha-182730-m03_ha-182730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test_ha-182730-m03_ha-182730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp testdata/cp-test.txt ha-182730-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile80590296/001/cp-test_ha-182730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m04:/home/docker/cp-test.txt ha-182730:/home/docker/cp-test_ha-182730-m04_ha-182730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730 "sudo cat /home/docker/cp-test_ha-182730-m04_ha-182730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m04:/home/docker/cp-test.txt ha-182730-m02:/home/docker/cp-test_ha-182730-m04_ha-182730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m02 "sudo cat /home/docker/cp-test_ha-182730-m04_ha-182730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m04:/home/docker/cp-test.txt ha-182730-m03:/home/docker/cp-test_ha-182730-m04_ha-182730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 ssh -n ha-182730-m03 "sudo cat /home/docker/cp-test_ha-182730-m04_ha-182730-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.91s)
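
Note: the matrix above exercises minikube cp's three source/destination forms, each verified with ssh + sudo cat; in general (paths illustrative):

    # host -> node
    out/minikube-linux-arm64 -p ha-182730 cp testdata/cp-test.txt ha-182730-m02:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-arm64 -p ha-182730 cp ha-182730-m02:/home/docker/cp-test.txt ha-182730-m03:/home/docker/cp-test.txt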

TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 node stop m02 -v=7 --alsologtostderr
E0315 07:13:42.514299 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 node stop m02 -v=7 --alsologtostderr: (12.169860684s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr: exit status 7 (758.754796ms)

-- stdout --
	ha-182730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-182730-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182730-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-182730-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0315 07:13:50.557021 3349435 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:13:50.557251 3349435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:13:50.557264 3349435 out.go:304] Setting ErrFile to fd 2...
	I0315 07:13:50.557271 3349435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:13:50.557545 3349435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:13:50.557814 3349435 out.go:298] Setting JSON to false
	I0315 07:13:50.557886 3349435 mustload.go:65] Loading cluster: ha-182730
	I0315 07:13:50.557960 3349435 notify.go:220] Checking for updates...
	I0315 07:13:50.558405 3349435 config.go:182] Loaded profile config "ha-182730": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:13:50.558427 3349435 status.go:255] checking status of ha-182730 ...
	I0315 07:13:50.559374 3349435 cli_runner.go:164] Run: docker container inspect ha-182730 --format={{.State.Status}}
	I0315 07:13:50.577752 3349435 status.go:330] ha-182730 host status = "Running" (err=<nil>)
	I0315 07:13:50.577792 3349435 host.go:66] Checking if "ha-182730" exists ...
	I0315 07:13:50.578186 3349435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182730
	I0315 07:13:50.611790 3349435 host.go:66] Checking if "ha-182730" exists ...
	I0315 07:13:50.613104 3349435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:13:50.613154 3349435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182730
	I0315 07:13:50.630866 3349435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36700 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/ha-182730/id_rsa Username:docker}
	I0315 07:13:50.728250 3349435 ssh_runner.go:195] Run: systemctl --version
	I0315 07:13:50.732520 3349435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:13:50.744337 3349435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:13:50.801580 3349435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-15 07:13:50.791378928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:13:50.802173 3349435 kubeconfig.go:125] found "ha-182730" server: "https://192.168.49.254:8443"
	I0315 07:13:50.802189 3349435 api_server.go:166] Checking apiserver status ...
	I0315 07:13:50.802229 3349435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:13:50.813966 3349435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	I0315 07:13:50.823432 3349435 api_server.go:182] apiserver freezer: "9:freezer:/docker/58caeaabb5e21f14471102d7101e0ed03547fe461de72a85e9e2b415534c9061/kubepods/burstable/pod48a0e37fb4a1a0dc139e2efcce01616d/3ee44f129dde37c7626427df6472a725146df0b4986a22d5b401aba3b9f7fd69"
	I0315 07:13:50.823507 3349435 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/58caeaabb5e21f14471102d7101e0ed03547fe461de72a85e9e2b415534c9061/kubepods/burstable/pod48a0e37fb4a1a0dc139e2efcce01616d/3ee44f129dde37c7626427df6472a725146df0b4986a22d5b401aba3b9f7fd69/freezer.state
	I0315 07:13:50.832397 3349435 api_server.go:204] freezer state: "THAWED"
	I0315 07:13:50.832425 3349435 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0315 07:13:50.841144 3349435 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0315 07:13:50.841176 3349435 status.go:422] ha-182730 apiserver status = Running (err=<nil>)
	I0315 07:13:50.841188 3349435 status.go:257] ha-182730 status: &{Name:ha-182730 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:13:50.841205 3349435 status.go:255] checking status of ha-182730-m02 ...
	I0315 07:13:50.841503 3349435 cli_runner.go:164] Run: docker container inspect ha-182730-m02 --format={{.State.Status}}
	I0315 07:13:50.857068 3349435 status.go:330] ha-182730-m02 host status = "Stopped" (err=<nil>)
	I0315 07:13:50.857097 3349435 status.go:343] host is not running, skipping remaining checks
	I0315 07:13:50.857125 3349435 status.go:257] ha-182730-m02 status: &{Name:ha-182730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:13:50.857147 3349435 status.go:255] checking status of ha-182730-m03 ...
	I0315 07:13:50.857437 3349435 cli_runner.go:164] Run: docker container inspect ha-182730-m03 --format={{.State.Status}}
	I0315 07:13:50.875250 3349435 status.go:330] ha-182730-m03 host status = "Running" (err=<nil>)
	I0315 07:13:50.875280 3349435 host.go:66] Checking if "ha-182730-m03" exists ...
	I0315 07:13:50.875629 3349435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182730-m03
	I0315 07:13:50.896409 3349435 host.go:66] Checking if "ha-182730-m03" exists ...
	I0315 07:13:50.896723 3349435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:13:50.896783 3349435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182730-m03
	I0315 07:13:50.919328 3349435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36710 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/ha-182730-m03/id_rsa Username:docker}
	I0315 07:13:51.016878 3349435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:13:51.030208 3349435 kubeconfig.go:125] found "ha-182730" server: "https://192.168.49.254:8443"
	I0315 07:13:51.030238 3349435 api_server.go:166] Checking apiserver status ...
	I0315 07:13:51.030281 3349435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:13:51.042423 3349435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	I0315 07:13:51.052636 3349435 api_server.go:182] apiserver freezer: "9:freezer:/docker/442bb1ef35afd7caf130580453061ba7679f832c871e2458855926d20449f852/kubepods/burstable/pod8b02d53a4a88da94708adf1d8d598592/05d4a556d622a62492350ca5548df003d5783d978cf30ccfdff9698d5897082b"
	I0315 07:13:51.052721 3349435 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/442bb1ef35afd7caf130580453061ba7679f832c871e2458855926d20449f852/kubepods/burstable/pod8b02d53a4a88da94708adf1d8d598592/05d4a556d622a62492350ca5548df003d5783d978cf30ccfdff9698d5897082b/freezer.state
	I0315 07:13:51.062290 3349435 api_server.go:204] freezer state: "THAWED"
	I0315 07:13:51.062340 3349435 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0315 07:13:51.072832 3349435 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0315 07:13:51.072878 3349435 status.go:422] ha-182730-m03 apiserver status = Running (err=<nil>)
	I0315 07:13:51.072889 3349435 status.go:257] ha-182730-m03 status: &{Name:ha-182730-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:13:51.072908 3349435 status.go:255] checking status of ha-182730-m04 ...
	I0315 07:13:51.073241 3349435 cli_runner.go:164] Run: docker container inspect ha-182730-m04 --format={{.State.Status}}
	I0315 07:13:51.094021 3349435 status.go:330] ha-182730-m04 host status = "Running" (err=<nil>)
	I0315 07:13:51.094049 3349435 host.go:66] Checking if "ha-182730-m04" exists ...
	I0315 07:13:51.094389 3349435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182730-m04
	I0315 07:13:51.113156 3349435 host.go:66] Checking if "ha-182730-m04" exists ...
	I0315 07:13:51.113537 3349435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:13:51.113598 3349435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182730-m04
	I0315 07:13:51.131859 3349435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36715 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/ha-182730-m04/id_rsa Username:docker}
	I0315 07:13:51.228812 3349435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:13:51.240024 3349435 status.go:257] ha-182730-m04 status: &{Name:ha-182730-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
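Note: the status probe recorded above locates the kube-apiserver process, resolves its freezer cgroup, confirms the state is THAWED, and only then hits /healthz. A by-hand sketch of the same sequence, run inside the node container (the PID and cgroup path are per-run values from this log; substitute your own, and -k is assumed acceptable because /healthz is anonymously readable under default RBAC):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                 # apiserver PID
    sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup             # its freezer cgroup
    sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state  # expect: THAWED
    curl -sk https://192.168.49.254:8443/healthz                 # expect: ok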
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 node start m02 -v=7 --alsologtostderr: (17.214722392s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr: (1.022565974s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.68s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-182730 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-182730 -v=7 --alsologtostderr
E0315 07:14:34.587239 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.593065 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.603336 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.623952 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.664570 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.745427 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:34.906263 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:35.226800 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:35.867513 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:37.147705 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:39.707980 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:14:44.828253 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-182730 -v=7 --alsologtostderr: (37.417635865s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182730 --wait=true -v=7 --alsologtostderr
E0315 07:14:55.069150 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:15:15.550221 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:15:56.510627 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-182730 --wait=true -v=7 --alsologtostderr: (1m38.059352248s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-182730
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.68s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 node delete m03 -v=7 --alsologtostderr: (10.525403715s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
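Note: the go-template in the last step walks .status.conditions on every node and prints the status of each Ready condition, so a healthy cluster after the m03 delete prints one True per remaining node. An equivalent check in jsonpath form (a sketch, not what the test runs):

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'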
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMultiControlPlane/serial/StopCluster (36.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 stop -v=7 --alsologtostderr: (36.106714471s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr: exit status 7 (167.247645ms)
-- stdout --
	ha-182730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182730-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182730-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0315 07:17:14.994538 3362959 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:17:15.004419 3362959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:17:15.004438 3362959 out.go:304] Setting ErrFile to fd 2...
	I0315 07:17:15.004446 3362959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:17:15.005009 3362959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:17:15.012671 3362959 out.go:298] Setting JSON to false
	I0315 07:17:15.012707 3362959 mustload.go:65] Loading cluster: ha-182730
	I0315 07:17:15.013476 3362959 config.go:182] Loaded profile config "ha-182730": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:17:15.013491 3362959 status.go:255] checking status of ha-182730 ...
	I0315 07:17:15.014255 3362959 notify.go:220] Checking for updates...
	I0315 07:17:15.016502 3362959 cli_runner.go:164] Run: docker container inspect ha-182730 --format={{.State.Status}}
	I0315 07:17:15.053136 3362959 status.go:330] ha-182730 host status = "Stopped" (err=<nil>)
	I0315 07:17:15.053187 3362959 status.go:343] host is not running, skipping remaining checks
	I0315 07:17:15.053195 3362959 status.go:257] ha-182730 status: &{Name:ha-182730 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:17:15.053256 3362959 status.go:255] checking status of ha-182730-m02 ...
	I0315 07:17:15.053742 3362959 cli_runner.go:164] Run: docker container inspect ha-182730-m02 --format={{.State.Status}}
	I0315 07:17:15.079655 3362959 status.go:330] ha-182730-m02 host status = "Stopped" (err=<nil>)
	I0315 07:17:15.079694 3362959 status.go:343] host is not running, skipping remaining checks
	I0315 07:17:15.079702 3362959 status.go:257] ha-182730-m02 status: &{Name:ha-182730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:17:15.079727 3362959 status.go:255] checking status of ha-182730-m04 ...
	I0315 07:17:15.080062 3362959 cli_runner.go:164] Run: docker container inspect ha-182730-m04 --format={{.State.Status}}
	I0315 07:17:15.098767 3362959 status.go:330] ha-182730-m04 host status = "Stopped" (err=<nil>)
	I0315 07:17:15.098793 3362959 status.go:343] host is not running, skipping remaining checks
	I0315 07:17:15.098801 3362959 status.go:257] ha-182730-m04 status: &{Name:ha-182730-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
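Note: with every node stopped, status still reports per-node state on stdout but exits non-zero (exit status 7 on this run) rather than erroring out, so scripts can gate on the exit code alone. A minimal sketch, assuming only the behaviour shown above:

    if out/minikube-linux-arm64 -p ha-182730 status >/dev/null 2>&1; then
        echo "cluster fully running"
    else
        echo "cluster degraded or stopped (status exited $?)"
    fi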
--- PASS: TestMultiControlPlane/serial/StopCluster (36.27s)

TestMultiControlPlane/serial/RestartCluster (67.94s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182730 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0315 07:17:18.432659 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:18:14.829082 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-182730 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.984468625s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.94s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (40.12s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-182730 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-182730 --control-plane -v=7 --alsologtostderr: (39.079394374s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-182730 status -v=7 --alsologtostderr: (1.040774584s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestJSONOutput/start/Command (59.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-998223 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0315 07:19:34.587655 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:20:02.275161 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-998223 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.666500023s)
--- PASS: TestJSONOutput/start/Command (59.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-998223 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-998223 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-998223 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-998223 --output=json --user=testUser: (5.776168872s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-637720 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-637720 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.70554ms)
-- stdout --
	{"specversion":"1.0","id":"c18d0391-1a24-4de1-92b3-f253711dbd3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-637720] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81965ff5-3394-4296-8550-d7128e7c928c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18213"}}
	{"specversion":"1.0","id":"f567ff13-75ba-4b47-b5fe-62ef77873f93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f8803a0-6518-4fa8-b610-6e6e35c27490","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig"}}
	{"specversion":"1.0","id":"f52f0fa0-f492-4cf4-ad3f-3b2801dbe620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube"}}
	{"specversion":"1.0","id":"a54b400c-5a68-4451-86c9-b803b615b3df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0043bb37-4a01-423e-9df0-53fe37a2e574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"da6b9f12-c5d4-4b01-a88c-028ed7c9d09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-637720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-637720
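Note: with --output=json, each stdout line is a self-contained CloudEvents-style JSON object, as in the block above. A sketch for pulling the failure out of that stream, assuming jq is available:

    out/minikube-linux-arm64 start -p json-output-error-637720 --memory=2200 \
        --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/arm64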
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (39.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-771731 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-771731 --network=: (37.409306771s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-771731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-771731
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-771731: (2.069624408s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.50s)

TestKicCustomNetwork/use_default_bridge_network (36.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-805640 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-805640 --network=bridge: (34.484557555s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-805640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-805640
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-805640: (1.972543585s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.48s)

TestKicExistingNetwork (34.64s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-481834 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-481834 --network=existing-network: (32.493736324s)
helpers_test.go:175: Cleaning up "existing-network-481834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-481834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-481834: (1.999617839s)
--- PASS: TestKicExistingNetwork (34.64s)

TestKicCustomSubnet (34.02s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-673963 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-673963 --subnet=192.168.60.0/24: (31.897036027s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-673963 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-673963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-673963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-673963: (2.107518201s)
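Note: the inspect template above indexes the first IPAM config entry, so it assumes a single-subnet network. Verifying the custom subnet by hand looks like:

    docker network inspect custom-subnet-673963 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected: 192.168.60.0/24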
--- PASS: TestKicCustomSubnet (34.02s)

TestKicStaticIP (35.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-967469 --static-ip=192.168.200.200
E0315 07:23:14.829025 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-967469 --static-ip=192.168.200.200: (33.443683413s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-967469 ip
helpers_test.go:175: Cleaning up "static-ip-967469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-967469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-967469: (2.172694612s)
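Note: the ip subcommand is what confirms --static-ip took effect; the same check in script form (a sketch, assuming only the flag value used above):

    test "$(out/minikube-linux-arm64 -p static-ip-967469 ip)" = 192.168.200.200 && echo "static IP applied"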
--- PASS: TestKicStaticIP (35.76s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (69.55s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-278813 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-278813 --driver=docker  --container-runtime=containerd: (29.893209897s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-282194 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-282194 --driver=docker  --container-runtime=containerd: (34.211134532s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-278813
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-282194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-282194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-282194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-282194: (1.931442028s)
helpers_test.go:175: Cleaning up "first-278813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-278813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-278813: (2.247950409s)
--- PASS: TestMinikubeProfile (69.55s)

TestMountStart/serial/StartWithMountFirst (8.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-001924 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0315 07:24:34.587281 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:24:37.874594 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-001924 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.811459647s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.81s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-001924 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-015290 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-015290 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.116123326s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.12s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-015290 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-001924 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-001924 --alsologtostderr -v=5: (1.616720127s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-015290 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-015290
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-015290: (1.210617187s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.26s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-015290
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-015290: (6.261338376s)
--- PASS: TestMountStart/serial/RestartStopped (7.26s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-015290 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (77.35s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652142 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652142 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.78828516s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.35s)

TestMultiNode/serial/DeployApp2Nodes (10.33s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-652142 -- rollout status deployment/busybox: (3.23363099s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-2dsj6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- nslookup kubernetes.io: (5.220279549s)
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-2dsj6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-2dsj6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.33s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-2dsj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-2dsj6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652142 -- exec busybox-5b5d89c9d6-nbdvd -- sh -c "ping -c 1 192.168.58.1"
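Note: the in-pod pipeline above assumes busybox nslookup prints the resolved address on its fifth line of output; awk 'NR==5' keeps that line and cut -d' ' -f3 takes its third space-separated field, leaving the bare host IP (192.168.58.1 here) for the follow-up ping:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.58.1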
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)

TestMultiNode/serial/AddNode (17.04s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-652142 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-652142 -v 3 --alsologtostderr: (16.333421174s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.04s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-652142 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (10.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp testdata/cp-test.txt multinode-652142:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2870420652/001/cp-test_multinode-652142.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142:/home/docker/cp-test.txt multinode-652142-m02:/home/docker/cp-test_multinode-652142_multinode-652142-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test_multinode-652142_multinode-652142-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142:/home/docker/cp-test.txt multinode-652142-m03:/home/docker/cp-test_multinode-652142_multinode-652142-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test_multinode-652142_multinode-652142-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp testdata/cp-test.txt multinode-652142-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2870420652/001/cp-test_multinode-652142-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m02:/home/docker/cp-test.txt multinode-652142:/home/docker/cp-test_multinode-652142-m02_multinode-652142.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test_multinode-652142-m02_multinode-652142.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m02:/home/docker/cp-test.txt multinode-652142-m03:/home/docker/cp-test_multinode-652142-m02_multinode-652142-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test_multinode-652142-m02_multinode-652142-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp testdata/cp-test.txt multinode-652142-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2870420652/001/cp-test_multinode-652142-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m03:/home/docker/cp-test.txt multinode-652142:/home/docker/cp-test_multinode-652142-m03_multinode-652142.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142 "sudo cat /home/docker/cp-test_multinode-652142-m03_multinode-652142.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m03:/home/docker/cp-test.txt multinode-652142-m02:/home/docker/cp-test_multinode-652142-m03_multinode-652142-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 ssh -n multinode-652142-m02 "sudo cat /home/docker/cp-test_multinode-652142-m03_multinode-652142-m02.txt"
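Note: the copy matrix above exercises every direction cp supports, addressing node-side files as <node>:<path>. In sketch form (the /tmp destination is illustrative):

    out/minikube-linux-arm64 -p multinode-652142 cp testdata/cp-test.txt multinode-652142-m02:/home/docker/cp-test.txt  # host -> node
    out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m02:/home/docker/cp-test.txt /tmp/cp-test.txt      # node -> host
    out/minikube-linux-arm64 -p multinode-652142 cp multinode-652142-m02:/home/docker/cp-test.txt multinode-652142:/home/docker/cp-test.txt  # node -> node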
--- PASS: TestMultiNode/serial/CopyFile (10.47s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-652142 node stop m03: (1.233733186s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652142 status: exit status 7 (542.704169ms)
-- stdout --
	multinode-652142
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-652142-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-652142-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr: exit status 7 (536.472537ms)
-- stdout --
	multinode-652142
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-652142-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-652142-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0315 07:27:01.077857 3414641 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:27:01.078237 3414641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:27:01.078349 3414641 out.go:304] Setting ErrFile to fd 2...
	I0315 07:27:01.078375 3414641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:27:01.079874 3414641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:27:01.080232 3414641 out.go:298] Setting JSON to false
	I0315 07:27:01.080467 3414641 mustload.go:65] Loading cluster: multinode-652142
	I0315 07:27:01.081473 3414641 notify.go:220] Checking for updates...
	I0315 07:27:01.083026 3414641 config.go:182] Loaded profile config "multinode-652142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:27:01.083048 3414641 status.go:255] checking status of multinode-652142 ...
	I0315 07:27:01.084153 3414641 cli_runner.go:164] Run: docker container inspect multinode-652142 --format={{.State.Status}}
	I0315 07:27:01.101559 3414641 status.go:330] multinode-652142 host status = "Running" (err=<nil>)
	I0315 07:27:01.101582 3414641 host.go:66] Checking if "multinode-652142" exists ...
	I0315 07:27:01.101889 3414641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-652142
	I0315 07:27:01.117761 3414641 host.go:66] Checking if "multinode-652142" exists ...
	I0315 07:27:01.118090 3414641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:27:01.118136 3414641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-652142
	I0315 07:27:01.149986 3414641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36820 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/multinode-652142/id_rsa Username:docker}
	I0315 07:27:01.248563 3414641 ssh_runner.go:195] Run: systemctl --version
	I0315 07:27:01.253085 3414641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:27:01.265042 3414641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:27:01.322243 3414641 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-15 07:27:01.311794549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:27:01.323063 3414641 kubeconfig.go:125] found "multinode-652142" server: "https://192.168.58.2:8443"
	I0315 07:27:01.323148 3414641 api_server.go:166] Checking apiserver status ...
	I0315 07:27:01.323229 3414641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 07:27:01.334994 3414641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I0315 07:27:01.345183 3414641 api_server.go:182] apiserver freezer: "9:freezer:/docker/bdcb68a25f35ea6e8e26327f0ad8247ab6e19dcfe23f828f4c97d1101a543a83/kubepods/burstable/podd9f2933fe7c015959c234ecffa1fc143/fdfb8d2a8242ed440c89e3195e3fc1ef40eafebbceb8fa4600cb52459b03bd47"
	I0315 07:27:01.345267 3414641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bdcb68a25f35ea6e8e26327f0ad8247ab6e19dcfe23f828f4c97d1101a543a83/kubepods/burstable/podd9f2933fe7c015959c234ecffa1fc143/fdfb8d2a8242ed440c89e3195e3fc1ef40eafebbceb8fa4600cb52459b03bd47/freezer.state
	I0315 07:27:01.353993 3414641 api_server.go:204] freezer state: "THAWED"
	I0315 07:27:01.354024 3414641 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0315 07:27:01.362788 3414641 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0315 07:27:01.362817 3414641 status.go:422] multinode-652142 apiserver status = Running (err=<nil>)
	I0315 07:27:01.362830 3414641 status.go:257] multinode-652142 status: &{Name:multinode-652142 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:27:01.362848 3414641 status.go:255] checking status of multinode-652142-m02 ...
	I0315 07:27:01.363195 3414641 cli_runner.go:164] Run: docker container inspect multinode-652142-m02 --format={{.State.Status}}
	I0315 07:27:01.381210 3414641 status.go:330] multinode-652142-m02 host status = "Running" (err=<nil>)
	I0315 07:27:01.381234 3414641 host.go:66] Checking if "multinode-652142-m02" exists ...
	I0315 07:27:01.381571 3414641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-652142-m02
	I0315 07:27:01.399324 3414641 host.go:66] Checking if "multinode-652142-m02" exists ...
	I0315 07:27:01.399812 3414641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 07:27:01.399870 3414641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-652142-m02
	I0315 07:27:01.419610 3414641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36825 SSHKeyPath:/home/jenkins/minikube-integration/18213-3295134/.minikube/machines/multinode-652142-m02/id_rsa Username:docker}
	I0315 07:27:01.516038 3414641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 07:27:01.527409 3414641 status.go:257] multinode-652142-m02 status: &{Name:multinode-652142-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:27:01.527452 3414641 status.go:255] checking status of multinode-652142-m03 ...
	I0315 07:27:01.527789 3414641 cli_runner.go:164] Run: docker container inspect multinode-652142-m03 --format={{.State.Status}}
	I0315 07:27:01.543439 3414641 status.go:330] multinode-652142-m03 host status = "Stopped" (err=<nil>)
	I0315 07:27:01.543464 3414641 status.go:343] host is not running, skipping remaining checks
	I0315 07:27:01.543471 3414641 status.go:257] multinode-652142-m03 status: &{Name:multinode-652142-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
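Worth noting from the transcript above: `minikube status` deliberately exits non-zero (exit status 7 in this run) once any node's host is Stopped, while still printing the per-node fields on stdout. Below is a minimal Go sketch of scripting against that behavior, reusing the binary path and profile name from this log; the helper is illustrative and not part of the test suite:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as multinode_test.go:254 above.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-652142", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // per-node host/kubelet/apiserver/kubeconfig fields

		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit status 7 is what this run shows once a node's host is Stopped.
			fmt.Println("cluster degraded: at least one node is stopped")
		} else if err != nil {
			fmt.Println("status failed unexpectedly:", err)
		}
	}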

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-652142 node start m03 -v=7 --alsologtostderr: (8.88736343s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (86.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652142
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-652142
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-652142: (24.93145193s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652142 --wait=true -v=8 --alsologtostderr
E0315 07:28:14.829100 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652142 --wait=true -v=8 --alsologtostderr: (1m1.163960531s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652142
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.23s)
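RestartKeepsNodes reduces to: record `node list`, stop the whole profile, start it again with `--wait=true`, and confirm `node list` is unchanged. A compact sketch of that flow, with error handling trimmed and binary/profile names taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// nodeList runs the same command the test runs before the stop and after
	// the restart (multinode_test.go:314/331 above).
	func nodeList() string {
		out, _ := exec.Command("out/minikube-linux-arm64", "node", "list", "-p", "multinode-652142").Output()
		return string(out)
	}

	func main() {
		before := nodeList()
		exec.Command("out/minikube-linux-arm64", "stop", "-p", "multinode-652142").Run()
		exec.Command("out/minikube-linux-arm64", "start", "-p", "multinode-652142", "--wait=true").Run()
		if after := nodeList(); after != before {
			fmt.Printf("node list changed across restart:\nbefore:\n%safter:\n%s", before, after)
		}
	}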

                                                
                                    
TestMultiNode/serial/DeleteNode (5.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-652142 node delete m03: (4.796868811s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.50s)
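The last command above uses a kubectl go-template that prints one line per node containing the status of its Ready condition; after deleting m03, two `True` lines are expected. A sketch that runs the same template (copied verbatim from the log, including its single quotes) and counts the results; treating the count as the assertion is my reading of the test, not quoted from it:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Template copied from multinode_test.go:444 above: one line per node,
		// holding the status of that node's Ready condition.
		tmpl := `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		ready := strings.Count(string(out), "True")
		fmt.Printf("%d nodes report Ready=True\n", ready) // expect 2 after deleting m03
	}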

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-652142 stop: (23.907907472s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652142 status: exit status 7 (94.783929ms)

                                                
                                                
-- stdout --
	multinode-652142
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-652142-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr: exit status 7 (96.507105ms)

                                                
                                                
-- stdout --
	multinode-652142
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-652142-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 07:29:07.014751 3422214 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:29:07.015003 3422214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:29:07.015016 3422214 out.go:304] Setting ErrFile to fd 2...
	I0315 07:29:07.015021 3422214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:29:07.015310 3422214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:29:07.015529 3422214 out.go:298] Setting JSON to false
	I0315 07:29:07.015581 3422214 mustload.go:65] Loading cluster: multinode-652142
	I0315 07:29:07.015651 3422214 notify.go:220] Checking for updates...
	I0315 07:29:07.016027 3422214 config.go:182] Loaded profile config "multinode-652142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:29:07.016046 3422214 status.go:255] checking status of multinode-652142 ...
	I0315 07:29:07.016595 3422214 cli_runner.go:164] Run: docker container inspect multinode-652142 --format={{.State.Status}}
	I0315 07:29:07.033758 3422214 status.go:330] multinode-652142 host status = "Stopped" (err=<nil>)
	I0315 07:29:07.033783 3422214 status.go:343] host is not running, skipping remaining checks
	I0315 07:29:07.033791 3422214 status.go:257] multinode-652142 status: &{Name:multinode-652142 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 07:29:07.033822 3422214 status.go:255] checking status of multinode-652142-m02 ...
	I0315 07:29:07.034126 3422214 cli_runner.go:164] Run: docker container inspect multinode-652142-m02 --format={{.State.Status}}
	I0315 07:29:07.049750 3422214 status.go:330] multinode-652142-m02 host status = "Stopped" (err=<nil>)
	I0315 07:29:07.049775 3422214 status.go:343] host is not running, skipping remaining checks
	I0315 07:29:07.049783 3422214 status.go:257] multinode-652142-m02 status: &{Name:multinode-652142-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652142 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0315 07:29:34.587690 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652142 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.19799901s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652142 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.89s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652142
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652142-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-652142-m02 --driver=docker  --container-runtime=containerd: exit status 14 (92.068366ms)

                                                
                                                
-- stdout --
	* [multinode-652142-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-652142-m02' is duplicated with machine name 'multinode-652142-m02' in profile 'multinode-652142'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652142-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652142-m03 --driver=docker  --container-runtime=containerd: (35.306644167s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-652142
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-652142: exit status 80 (351.125589ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-652142 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-652142-m03 already exists in multinode-652142-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-652142-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-652142-m03: (2.001650791s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.82s)
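ValidateNameConflict exercises two refusals shown above: starting a profile whose name collides with a machine name inside an existing profile exits 14 (MK_USAGE), and `node add` against a name already used by a standalone profile exits 80 (GUEST_NODE_ADD). A sketch probing the first case, with names taken from the log; the helper is illustrative only:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// multinode-652142-m02 is already a machine inside profile multinode-652142,
		// so this start must be rejected before any container is created.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "multinode-652142-m02",
			"--driver=docker", "--container-runtime=containerd")
		out, err := cmd.CombinedOutput()

		var ee *exec.ExitError
		switch {
		case errors.As(err, &ee) && ee.ExitCode() == 14:
			fmt.Println("duplicate profile name rejected (MK_USAGE), as expected")
		case err != nil:
			fmt.Printf("failed differently than expected: %v\n%s", err, out)
		default:
			fmt.Println("start unexpectedly succeeded; name conflict not detected")
		}
	}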

                                                
                                    
TestPreload (110.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-838009 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0315 07:30:57.635404 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-838009 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.298980043s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-838009 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-838009 image pull gcr.io/k8s-minikube/busybox: (1.292992973s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-838009
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-838009: (12.062681154s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-838009 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-838009 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.973374978s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-838009 image list
helpers_test.go:175: Cleaning up "test-preload-838009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-838009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-838009: (2.38685018s)
--- PASS: TestPreload (110.36s)
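Reconstructed from the commands above, TestPreload's flow is: start with `--preload=false` on Kubernetes v1.24.4 so nothing is preloaded, pull busybox into the node, stop, restart on the preload-enabled default version, then run `image list`. A sketch of the final verification step; using a substring check on the busybox image as the success criterion is an assumption on my part:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// preload_test.go:71 above lists images after the restart.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-838009", "image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// The image was pulled before the stop; it should survive the
		// preload-enabled restart rather than be clobbered by the tarball.
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("pulled image survived the restart")
		} else {
			fmt.Println("pulled image missing after restart")
		}
	}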

                                                
                                    
TestScheduledStopUnix (112.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-040269 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-040269 --memory=2048 --driver=docker  --container-runtime=containerd: (36.186624817s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-040269 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-040269 -n scheduled-stop-040269
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-040269 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-040269 --cancel-scheduled
E0315 07:33:14.829530 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-040269 -n scheduled-stop-040269
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-040269
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-040269 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-040269
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-040269: exit status 7 (79.212527ms)

                                                
                                                
-- stdout --
	scheduled-stop-040269
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-040269 -n scheduled-stop-040269
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-040269 -n scheduled-stop-040269: exit status 7 (75.061477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-040269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-040269
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-040269: (5.091181893s)
--- PASS: TestScheduledStopUnix (112.91s)
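The scheduled-stop sequence shows three knobs: `--schedule 5m` arms a delayed stop (the `signal error` lines above track the background process handle), `--cancel-scheduled` disarms it, and a final `--schedule 15s` is left to fire, after which `status` exits 7 with everything Stopped. A sketch of arming, cancelling, and re-arming; the sleep duration is arbitrary and the helper is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}

	func main() {
		p := "scheduled-stop-040269"
		run("stop", "-p", p, "--schedule", "5m")   // arm a stop five minutes out
		run("stop", "-p", p, "--cancel-scheduled") // disarm it; the cluster stays up
		run("stop", "-p", p, "--schedule", "15s")  // re-arm with a short fuse
		time.Sleep(30 * time.Second)               // give the scheduled stop time to fire

		// Exit status 7 from status indicates the host is now Stopped (see above).
		if err := run("status", "-p", p); err != nil {
			fmt.Println("cluster stopped on schedule:", err)
		}
	}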

                                                
                                    
TestInsufficientStorage (10.64s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-154888 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-154888 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.134551335s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"45812597-d2b8-49d9-8282-859616f78780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-154888] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e00c1bfe-e511-4677-bb68-f38aae932ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18213"}}
	{"specversion":"1.0","id":"b8f35fad-cd17-4101-b9ee-32ee6e529d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5746ae6d-e537-4f60-98d1-adf9423be119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig"}}
	{"specversion":"1.0","id":"34d4e764-1feb-4514-973c-904072d0d181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube"}}
	{"specversion":"1.0","id":"e9a11dbf-ab9a-438c-98be-125970bcef6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"37a658f0-1887-4630-bcf4-9a9cef5e4466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"edc68d95-f857-4e91-9b3b-db86069cd2f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d1c57758-ea74-41be-a03d-713f3e265568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"de7ee176-21fc-41fb-8653-ad6ab5b5e693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c3e6d8b-1009-4111-90d6-0006ddd3f5fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"971ce6d6-7b44-4ca0-9682-e8e1ed97ebe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-154888\" primary control-plane node in \"insufficient-storage-154888\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7098faa-bcaf-4e51-805a-5ce0ab3ccbac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"387f8463-8238-44b3-ba1e-b2375c29e0a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0352a34b-bc8e-43d0-8309-aac21fadbbdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154888 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154888 --output=json --layout=cluster: exit status 7 (305.105332ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154888","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154888","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:34:30.466561 3439812 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-154888" does not appear in /home/jenkins/minikube-integration/18213-3295134/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-154888 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-154888 --output=json --layout=cluster: exit status 7 (290.843134ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-154888","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-154888","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 07:34:30.758161 3439865 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-154888" does not appear in /home/jenkins/minikube-integration/18213-3295134/kubeconfig
	E0315 07:34:30.768530 3439865 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/insufficient-storage-154888/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-154888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-154888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-154888: (1.910330352s)
--- PASS: TestInsufficientStorage (10.64s)
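With `--output=json`, start streams one CloudEvents-style JSON object per line, and the storage check surfaces as an `io.k8s.sigs.minikube.error` event carrying exitcode 26 (RSRC_DOCKER_STORAGE) before the process itself exits 26. A sketch that scans the stream for error events; the struct mirrors only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Type string `json:"type"`
		Data struct {
			ExitCode string `json:"exitcode"`
			Message  string `json:"message"`
			Name     string `json:"name"`
		} `json:"data"`
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "insufficient-storage-154888",
			"--memory=2048", "--output=json", "--wait=true", "--driver=docker", "--container-runtime=containerd")
		stdout, _ := cmd.StdoutPipe()
		cmd.Start()

		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var e event
			if json.Unmarshal(sc.Bytes(), &e) == nil && e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s (exitcode %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
			}
		}
		cmd.Wait() // expected to exit 26 in this scenario
	}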

                                                
                                    
TestRunningBinaryUpgrade (98.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3742479796 start -p running-upgrade-146303 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3742479796 start -p running-upgrade-146303 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.074278125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-146303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-146303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.848325834s)
helpers_test.go:175: Cleaning up "running-upgrade-146303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-146303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-146303: (2.769889301s)
--- PASS: TestRunningBinaryUpgrade (98.35s)

                                                
                                    
TestKubernetesUpgrade (196.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.951060384s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-519385
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-519385: (1.358942335s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-519385 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-519385 status --format={{.Host}}: exit status 7 (97.689488ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m48.766494196s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-519385 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (106.529412ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-519385] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-519385
	    minikube start -p kubernetes-upgrade-519385 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5193852 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-519385 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-519385 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.263319618s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-519385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-519385
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-519385: (2.41799038s)
--- PASS: TestKubernetesUpgrade (196.14s)
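The key negative check in TestKubernetesUpgrade: once the cluster runs v1.29.0-rc.2, requesting v1.20.0 fails immediately with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the suggestions shown above, leaving the cluster untouched, which is why the follow-up start at the same version finishes in about 19 seconds. A sketch of detecting that refusal, using the invocation from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same downgrade attempt as version_upgrade_test.go:269 above.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "kubernetes-upgrade-519385",
			"--memory=2200", "--kubernetes-version=v1.20.0",
			"--driver=docker", "--container-runtime=containerd")
		out, err := cmd.CombinedOutput()

		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 106 {
			fmt.Println("downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)")
		} else {
			fmt.Printf("expected exit 106, got %v\n%s", err, out)
		}
	}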

                                                
                                    
TestMissingContainerUpgrade (160.31s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3646741703 start -p missing-upgrade-851924 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3646741703 start -p missing-upgrade-851924 --memory=2200 --driver=docker  --container-runtime=containerd: (1m14.036158101s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-851924
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-851924: (10.626301167s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-851924
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-851924 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0315 07:38:14.829573 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-851924 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.194147491s)
helpers_test.go:175: Cleaning up "missing-upgrade-851924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-851924
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-851924: (2.314457163s)
--- PASS: TestMissingContainerUpgrade (160.31s)

                                                
                                    
TestPause/serial/Start (68.35s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-893887 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-893887 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m8.349393394s)
--- PASS: TestPause/serial/Start (68.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (122.589944ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-680469] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-680469 --driver=docker  --container-runtime=containerd
E0315 07:34:34.587821 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-680469 --driver=docker  --container-runtime=containerd: (41.867404589s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-680469 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.723661068s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-680469 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-680469 status -o json: exit status 2 (310.63226ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-680469","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-680469
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-680469: (2.00038482s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.03s)

                                                
                                    
TestNoKubernetes/serial/Start (8.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-680469 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.027371998s)
--- PASS: TestNoKubernetes/serial/Start (8.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-680469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-680469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.650166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
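The "Kubernetes not running" check is simply systemd state probed over `minikube ssh`: `systemctl is-active --quiet` exits non-zero for an inactive unit (status 3 inside the guest, surfaced here as ssh exit status 1), so the test passes precisely because the command fails. A minimal sketch of the same probe, using the command from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe as no_kubernetes_test.go:147 above; a failing exit code is
		// the desired outcome, meaning the kubelet unit is not active.
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-680469",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not running inside the guest:", err)
		} else {
			fmt.Println("unexpected: kubelet is active in a --no-kubernetes profile")
		}
	}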

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-680469
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-680469: (1.295762532s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.38s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-893887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-893887 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.367340412s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-680469 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-680469 --driver=docker  --container-runtime=containerd: (7.556156205s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.56s)

                                                
                                    
TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-893887 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-893887 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-893887 --output=json --layout=cluster: exit status 2 (371.255782ms)

                                                
                                                
-- stdout --
	{"Name":"pause-893887","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-893887","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
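On a paused profile, `status --output=json --layout=cluster` exits 2 and reports HTTP-flavored StatusCode values: 418 for Paused, 405 for Stopped, 200 for OK, all visible in the JSON above. A sketch that decodes just the fields this log shows (the struct is mine, not minikube's exported type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterStatus mirrors the fields visible in the --layout=cluster output above.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		// A non-zero exit (2) is expected while paused; stdout still carries the JSON.
		out, _ := exec.Command("out/minikube-linux-arm64", "status", "-p", "pause-893887",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unable to parse status JSON:", err)
			return
		}
		fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 (Paused)
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
		}
	}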

                                                
                                    
TestPause/serial/Unpause (0.83s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-893887 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-680469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-680469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (483.855633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                    
TestPause/serial/PauseAgain (1.25s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-893887 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-893887 --alsologtostderr -v=5: (1.246777467s)
--- PASS: TestPause/serial/PauseAgain (1.25s)

                                                
                                    
TestPause/serial/DeletePaused (2.92s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-893887 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-893887 --alsologtostderr -v=5: (2.924493993s)
--- PASS: TestPause/serial/DeletePaused (2.92s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-893887
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-893887: exit status 1 (24.467999ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-893887: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)
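Cleanup verification leans on Docker itself: after the paused profile is deleted, `docker volume inspect pause-893887` must fail with "no such volume" (exit status 1, stdout `[]`), and `docker ps -a` and `docker network ls` must no longer mention it. A sketch of the volume check, using the command from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// pause_test.go:173 above runs this exact command and expects it to fail.
		out, err := exec.Command("docker", "volume", "inspect", "pause-893887").CombinedOutput()
		if err != nil {
			// Expected: the daemon answers "no such volume" once the profile is gone.
			fmt.Printf("volume gone, as expected: %s", out)
		} else {
			fmt.Println("volume still exists; delete did not clean up")
		}
	}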

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (90.91s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.404675528 start -p stopped-upgrade-177897 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.404675528 start -p stopped-upgrade-177897 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (40.013678677s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.404675528 -p stopped-upgrade-177897 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.404675528 -p stopped-upgrade-177897 stop: (1.806392348s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-177897 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0315 07:39:34.587937 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-177897 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.084803838s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (90.91s)
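
The upgrade scenario is three sequential steps: start a cluster with an old release binary, stop it with that same binary, then require the freshly built binary to start the stopped cluster. A condensed sketch of the sequence (binary paths copied from this run; the run helper is illustrative and omits the test framework's retry and timeout handling):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing the test's
// (dbg) Run / Done pattern.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.404675528" // released binary from this run
	newBin := "out/minikube-linux-arm64"        // freshly built binary
	profile := "stopped-upgrade-177897"

	run(oldBin, "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", profile, "stop")
	// The upgrade assertion: the new binary must start the stopped cluster.
	run(newBin, "start", "-p", profile, "--memory=2200",
		"--driver=docker", "--container-runtime=containerd")
}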

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-177897
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-177897: (1.205346522s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestNetworkPlugins/group/false (6.54s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-070881 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-070881 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (310.304762ms)
-- stdout --
	* [false-070881] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18213
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0315 07:40:53.985788 3474601 out.go:291] Setting OutFile to fd 1 ...
	I0315 07:40:53.986283 3474601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:40:53.986322 3474601 out.go:304] Setting ErrFile to fd 2...
	I0315 07:40:53.986344 3474601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 07:40:53.987140 3474601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18213-3295134/.minikube/bin
	I0315 07:40:53.987738 3474601 out.go:298] Setting JSON to false
	I0315 07:40:53.988910 3474601 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":58998,"bootTime":1710429456,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0315 07:40:53.989036 3474601 start.go:139] virtualization:  
	I0315 07:40:53.992903 3474601 out.go:177] * [false-070881] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0315 07:40:53.996536 3474601 out.go:177]   - MINIKUBE_LOCATION=18213
	I0315 07:40:53.998982 3474601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 07:40:53.996785 3474601 notify.go:220] Checking for updates...
	I0315 07:40:54.015914 3474601 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18213-3295134/kubeconfig
	I0315 07:40:54.018853 3474601 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18213-3295134/.minikube
	I0315 07:40:54.021113 3474601 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0315 07:40:54.023437 3474601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 07:40:54.027553 3474601 config.go:182] Loaded profile config "force-systemd-env-770281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0315 07:40:54.027801 3474601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 07:40:54.078057 3474601 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0315 07:40:54.078170 3474601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 07:40:54.187540 3474601 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:48 SystemTime:2024-03-15 07:40:54.172735051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0315 07:40:54.187676 3474601 docker.go:295] overlay module found
	I0315 07:40:54.192354 3474601 out.go:177] * Using the docker driver based on user configuration
	I0315 07:40:54.195223 3474601 start.go:297] selected driver: docker
	I0315 07:40:54.195245 3474601 start.go:901] validating driver "docker" against <nil>
	I0315 07:40:54.195260 3474601 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 07:40:54.199994 3474601 out.go:177] 
	W0315 07:40:54.201931 3474601 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0315 07:40:54.204459 3474601 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-070881 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-070881

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-070881

>>> host: /etc/nsswitch.conf:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/hosts:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/resolv.conf:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-070881

>>> host: crictl pods:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: crictl containers:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> k8s: describe netcat deployment:
error: context "false-070881" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-070881" does not exist

>>> k8s: netcat logs:
error: context "false-070881" does not exist

>>> k8s: describe coredns deployment:
error: context "false-070881" does not exist

>>> k8s: describe coredns pods:
error: context "false-070881" does not exist

>>> k8s: coredns logs:
error: context "false-070881" does not exist

>>> k8s: describe api server pod(s):
error: context "false-070881" does not exist

>>> k8s: api server logs:
error: context "false-070881" does not exist

>>> host: /etc/cni:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: ip a s:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: ip r s:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: iptables-save:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: iptables table nat:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> k8s: describe kube-proxy daemon set:
error: context "false-070881" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-070881" does not exist

>>> k8s: kube-proxy logs:
error: context "false-070881" does not exist

>>> host: kubelet daemon status:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: kubelet daemon config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> k8s: kubelet logs:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-070881

>>> host: docker daemon status:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: docker daemon config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/docker/daemon.json:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: docker system info:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: cri-docker daemon status:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: cri-docker daemon config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: cri-dockerd version:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: containerd daemon status:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: containerd daemon config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/containerd/config.toml:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: containerd config dump:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: crio daemon status:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: crio daemon config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: /etc/crio:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"

>>> host: crio config:
* Profile "false-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-070881"
----------------------- debugLogs end: false-070881 [took: 5.707833463s] --------------------------------
helpers_test.go:175: Cleaning up "false-070881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-070881
--- PASS: TestNetworkPlugins/group/false (6.54s)
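
Despite the name, this test passes when minikube refuses to start: the containerd runtime has no built-in pod network, so requesting --cni=false is rejected up front with a MK_USAGE error and exit code 14, before any node is created. A sketch asserting that behavior (the binary path is the one built for this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Expect exit status 14 (MK_USAGE): containerd requires a CNI plugin,
	// so disabling CNI must be rejected before any node is created.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-070881",
		"--cni=false", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("rejected as expected: MK_USAGE")
		return
	}
	fmt.Println("unexpected result:", err)
}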

TestStartStop/group/old-k8s-version/serial/FirstStart (174.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0315 07:43:14.829517 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:44:34.587729 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-591842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m54.548703055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-484299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-484299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (59.084120629s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-591842 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32091b7c-f196-4259-976c-3ccb0b700563] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [32091b7c-f196-4259-976c-3ccb0b700563] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004565151s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-591842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.83s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-591842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-591842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.30786118s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-591842 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

TestStartStop/group/old-k8s-version/serial/Stop (12.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-591842 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-591842 --alsologtostderr -v=3: (12.464346495s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591842 -n old-k8s-version-591842
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-591842 -n old-k8s-version-591842: exit status 7 (76.657625ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-591842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
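
The "status error: exit status 7 (may be ok)" note reflects how minikube status encodes state in its exit code: with the host stopped, the command still prints the templated field but exits non-zero (7 in this run), so the test ignores the exit code and keys off stdout. A sketch of that tolerant read (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the Host field of `minikube status`. The command exits
// non-zero (here, 7) when the cluster is stopped, so the error is ignored
// and only the templated stdout is used, matching the test's
// "status error ... (may be ok)" handling.
func hostState(profile string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println("host:", hostState("old-k8s-version-591842")) // e.g. "Stopped"
}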

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-484299 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1b8d735-8414-4d33-a40d-a6b1bf9bac4f] Pending
helpers_test.go:344: "busybox" [a1b8d735-8414-4d33-a40d-a6b1bf9bac4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1b8d735-8414-4d33-a40d-a6b1bf9bac4f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003343778s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-484299 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-484299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-484299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.186308339s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-484299 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-484299 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-484299 --alsologtostderr -v=3: (12.164936816s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299: exit status 7 (114.419969ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-484299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-484299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0315 07:47:37.635824 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:48:14.829166 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 07:49:34.587840 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-484299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m27.245437929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-528kq" [b500448f-00b9-484c-915e-2df1e440933c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004279502s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-528kq" [b500448f-00b9-484c-915e-2df1e440933c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004845785s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-484299 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-484299 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
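
The image audit shells out to "minikube image list --format=json" and flags anything outside the stock Kubernetes image set, as in the "Found non-minikube image" lines above. A sketch of reading that output; the struct below models only the repoTags field and assumes the JSON field names of recent minikube releases, so treat it as illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image models just the field we need from `minikube image list
// --format=json`; the full (assumed) schema carries more fields.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"default-k8s-diff-port-484299", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Print every tag; a real audit would diff this list against the
	// expected Kubernetes images for the cluster's version.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}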

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-484299 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299: exit status 2 (335.826527ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299: exit status 2 (333.65569ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-484299 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-484299 -n default-k8s-diff-port-484299
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)
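
The pause check reads two fields of minikube status through Go templates and tolerates exit status 2, which here simply signals a non-running component: after pause the API server should report Paused while the kubelet reports Stopped, and after unpause both queries are expected to succeed again. A sketch of the paired probe (same tolerant pattern as the stopped-host check above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// component queries one field of `minikube status` via a Go template.
// Exit status 2 just signals a non-running component, so it is ignored.
func component(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "default-k8s-diff-port-484299"
	// After `minikube pause`, the expected pair of states is:
	fmt.Println(component(p, "APIServer")) // Paused
	fmt.Println(component(p, "Kubelet"))   // Stopped
}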

TestStartStop/group/embed-certs/serial/FirstStart (63.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-722347 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-722347 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m3.769646113s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.77s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7fkx2" [1b3d6a15-efee-4b43-950a-a3e85d4030e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004640398s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7fkx2" [1b3d6a15-efee-4b43-950a-a3e85d4030e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004194496s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-591842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-591842 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-591842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591842 -n old-k8s-version-591842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591842 -n old-k8s-version-591842: exit status 2 (344.273109ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591842 -n old-k8s-version-591842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591842 -n old-k8s-version-591842: exit status 2 (349.968934ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-591842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-591842 -n old-k8s-version-591842
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-591842 -n old-k8s-version-591842
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

TestStartStop/group/no-preload/serial/FirstStart (75.32s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-727559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-727559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m15.318461681s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-722347 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5d4ad736-6efb-4c9d-9f84-baa25ad23819] Pending
helpers_test.go:344: "busybox" [5d4ad736-6efb-4c9d-9f84-baa25ad23819] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5d4ad736-6efb-4c9d-9f84-baa25ad23819] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003420552s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-722347 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-722347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-722347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.464480237s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-722347 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/embed-certs/serial/Stop (12.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-722347 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-722347 --alsologtostderr -v=3: (12.435860067s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-722347 -n embed-certs-722347
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-722347 -n embed-certs-722347: exit status 7 (106.234267ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-722347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)
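
status --format takes a Go template over minikube's status struct, so {{.Host}} prints only the host state; on a stopped profile the command reports Stopped with a non-zero exit code (7 here), which the test explicitly tolerates. The same probe by hand:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-722347 -n embed-certs-722347 \
      || echo "non-zero exit ($?) is expected while the profile is stopped"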

TestStartStop/group/embed-certs/serial/SecondStart (270.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-722347 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0315 07:53:14.829889 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-722347 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m29.737580934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-722347 -n embed-certs-722347
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.11s)
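
With --embed-certs, minikube inlines the client certificate and key into the kubeconfig entry (client-certificate-data / client-key-data) instead of pointing at files under .minikube/profiles. A quick confirmation sketch (default kubeconfig location assumed):

    # non-empty output means the certificate is embedded rather than path-referenced
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-722347")].user.client-certificate-data}' | head -c 40; echo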

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-727559 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd6ff3d3-4894-4ece-9f16-c9776f11ecc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd6ff3d3-4894-4ece-9f16-c9776f11ecc0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004326422s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-727559 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-727559 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-727559 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064732287s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-727559 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-727559 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-727559 --alsologtostderr -v=3: (12.177234097s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-727559 -n no-preload-727559
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-727559 -n no-preload-727559: exit status 7 (106.458016ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-727559 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (269.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-727559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0315 07:54:34.587633 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 07:55:08.134243 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.140152 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.150483 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.170788 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.211052 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.291351 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.451800 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:08.772409 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:09.413580 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:10.694037 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:13.255225 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:18.375764 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:28.615975 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:49.098858 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:55:56.789196 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:56.794490 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:56.804805 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:56.825228 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:56.865573 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:56.945839 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:57.106287 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:57.426886 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:58.067927 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:55:59.348766 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:56:01.909435 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:56:07.030298 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:56:17.270514 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:56:30.059472 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:56:37.750726 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-727559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m28.850227186s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-727559 -n no-preload-727559
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.43s)
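
--preload=false skips minikube's preloaded image tarball, so the runtime pulls every image individually, one reason this second start still takes about 4.5 minutes. To inspect what ended up in containerd's store afterwards (crictl ships inside the minikube node):

    out/minikube-linux-arm64 -p no-preload-727559 ssh -- sudo crictl images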

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x8c8t" [c39656f3-24ab-4571-8f8d-07c77a1ed86d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004225878s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x8c8t" [c39656f3-24ab-4571-8f8d-07c77a1ed86d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005900704s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-722347 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-722347 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-722347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-722347 -n embed-certs-722347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-722347 -n embed-certs-722347: exit status 2 (333.157995ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-722347 -n embed-certs-722347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-722347 -n embed-certs-722347: exit status 2 (332.848399ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-722347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-722347 -n embed-certs-722347
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-722347 -n embed-certs-722347
E0315 07:57:18.711664 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)
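
The pause check is symmetric: while paused, the {{.APIServer}} template reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, and unpause restores both. The full round trip as a sketch:

    out/minikube-linux-arm64 pause -p embed-certs-722347 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-722347 || true  # Paused, exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-722347 || true    # Stopped, exit 2
    out/minikube-linux-arm64 unpause -p embed-certs-722347 --alsologtostderr -v=1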

TestStartStop/group/newest-cni/serial/FirstStart (43.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-908064 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0315 07:57:51.980273 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 07:57:57.879544 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-908064 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (43.969823878s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.97s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-908064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-908064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.611899112s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.61s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-908064 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-908064 --alsologtostderr -v=3: (1.278195824s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-908064 -n newest-cni-908064
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-908064 -n newest-cni-908064: exit status 7 (81.533193ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-908064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (16.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-908064 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-908064 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (16.284625249s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-908064 -n newest-cni-908064
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.72s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sh4c" [77fda850-8198-49fa-9481-006bc33756b9] Running
E0315 07:58:14.829467 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006335387s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sh4c" [77fda850-8198-49fa-9481-006bc33756b9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004504612s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-727559 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-727559 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
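
image list --format=json dumps the runtime's image inventory, and the test flags anything outside minikube's known base set, hence the busybox and kindnet entries above. To eyeball the same list (jq on the host is an assumption):

    out/minikube-linux-arm64 -p no-preload-727559 image list --format=json | jq .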

TestStartStop/group/no-preload/serial/Pause (4.69s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-727559 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-727559 --alsologtostderr -v=1: (1.329291255s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-727559 -n no-preload-727559
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-727559 -n no-preload-727559: exit status 2 (510.66174ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-727559 -n no-preload-727559
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-727559 -n no-preload-727559: exit status 2 (512.135741ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-727559 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-727559 --alsologtostderr -v=1: (1.094127041s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-727559 -n no-preload-727559
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-727559 -n no-preload-727559
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.69s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-908064 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (4.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-908064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-908064 --alsologtostderr -v=1: (1.407393858s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-908064 -n newest-cni-908064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-908064 -n newest-cni-908064: exit status 2 (550.314337ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-908064 -n newest-cni-908064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-908064 -n newest-cni-908064: exit status 2 (492.959737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-908064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-908064 -n newest-cni-908064
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-908064 -n newest-cni-908064
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.67s)
E0315 08:04:17.636813 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 08:04:34.588012 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
E0315 08:04:41.118355 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:04:43.696841 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.702112 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.708758 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:43.712334 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.714903 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:43.725117 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:43.733301 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.745827 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:43.774387 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.786603 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:43.854963 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:43.867320 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:44.015796 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:44.028070 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:44.335993 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:44.349214 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:44.977122 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:44.990408 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:46.257809 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:46.270986 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:48.818212 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:48.831484 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:04:53.939397 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:04:53.952601 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:05:04.180230 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:05:04.193356 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory
E0315 08:05:08.134251 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 08:05:24.660683 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/kindnet-070881/client.crt: no such file or directory
E0315 08:05:24.673904 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/auto-070881/client.crt: no such file or directory

TestNetworkPlugins/group/auto/Start (71.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m11.548888573s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.55s)

TestNetworkPlugins/group/kindnet/Start (69.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0315 07:58:40.633393 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 07:59:34.587811 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/functional-757678/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m9.954215184s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.95s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (8.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5g8xt" [de6278b9-121d-4ae0-b26f-7e8f3c0d515b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5g8xt" [de6278b9-121d-4ae0-b26f-7e8f3c0d515b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.005218855s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ck27c" [dab7c313-fb68-4111-8429-bd056933da28] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004675329s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
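
ControllerPod just waits for the CNI's own pod by label; the equivalent hand check:

    kubectl --context kindnet-070881 -n kube-system wait --for=condition=ready pod \
      -l app=kindnet --timeout=10m0s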

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7pzbz" [726741ce-8d52-4b63-a424-85f35149501b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7pzbz" [726741ce-8d52-4b63-a424-85f35149501b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004375302s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.47s)
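
The DNS, Localhost, and HairPin probes all run inside the netcat deployment: nslookup exercises cluster DNS, nc -z makes a connect-only check bounded by -w 5, and the hairpin case dials the pod's own Service name, which only succeeds when the CNI handles hairpin traffic. Replayed by hand:

    kubectl --context kindnet-070881 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"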

TestNetworkPlugins/group/calico/Start (84.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m24.376293381s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.38s)

TestNetworkPlugins/group/custom-flannel/Start (61.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0315 08:00:35.820534 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/old-k8s-version-591842/client.crt: no such file or directory
E0315 08:00:56.789366 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
E0315 08:01:24.474088 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/default-k8s-diff-port-484299/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m1.426975574s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.43s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7h4xw" [609c8d27-ff94-42a9-90b4-fd87d8971e07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7h4xw" [609c8d27-ff94-42a9-90b4-fd87d8971e07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004653332s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hzll9" [781923b2-5a23-4a37-845e-78b53a2dcbbe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.011484269s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vmcpj" [7bc09665-e466-4daf-a83f-71deaf0a9b8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vmcpj" [7bc09665-e466-4daf-a83f-71deaf0a9b8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008028476s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

TestNetworkPlugins/group/calico/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (91.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m31.78402815s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.78s)
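
Note: each Start subtest shells out to minikube start with --wait=true --wait-timeout=15m, so the command itself blocks until core components are healthy; the 91.78s wall time is dominated by that wait. Driving the same invocation from Go under an outer cap (a sketch; the 20-minute context timeout is my assumption, not the harness value):

    // start_profile.go - sketch: run minikube start under a hard timeout.
    package main

    import (
        "context"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
            "-p", "enable-default-cni-070881", "--memory=3072", "--alsologtostderr",
            "--wait=true", "--wait-timeout=15m", "--enable-default-cni=true",
            "--driver=docker", "--container-runtime=containerd")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("start failed: %v\n%s", err, out)
        }
        log.Println("cluster up in profile enable-default-cni-070881")
    }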

TestNetworkPlugins/group/flannel/Start (63.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0315 08:03:14.828928 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/addons-639618/client.crt: no such file or directory
E0315 08:03:19.184531 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.189705 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.199943 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.220123 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.260410 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.340788 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.501212 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:19.821660 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:20.462104 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:21.742734 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:24.302987 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
E0315 08:03:29.423599 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.108840168s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.11s)
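
Note: the E0315 ... cert_rotation.go:168 lines interleaved above are not flannel failures. client-go's certificate-rotation watcher is still polling client.crt paths that belonged to profiles earlier tests already deleted (addons-639618, no-preload-727559), so each poll logs "no such file or directory"; the flannel start itself completed cleanly in 1m3s.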

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tbwn6" [1b382856-3942-4082-b81f-b0390ab6d6f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0038161s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fpw9h" [02890803-d340-4479-b6ce-592504853704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fpw9h" [02890803-d340-4479-b6ce-592504853704] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003741785s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dphx2" [b6f36fb2-7cf6-4d27-bbc4-b1cf1723a532] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 08:03:39.664178 3300550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/no-preload-727559/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dphx2" [b6f36fb2-7cf6-4d27-bbc4-b1cf1723a532] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007459198s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (83.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-070881 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m23.293118824s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-070881 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-070881 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m9j98" [118a147d-a236-4ecc-9329-df366f48c7db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m9j98" [118a147d-a236-4ecc-9329-df366f48c7db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003519157s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-070881 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-070881 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-230205 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-230205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-230205
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-775605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-775605
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (5.29s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-070881 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-070881

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-070881

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/hosts:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/resolv.conf:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-070881

>>> host: crictl pods:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: crictl containers:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> k8s: describe netcat deployment:
error: context "kubenet-070881" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-070881" does not exist

>>> k8s: netcat logs:
error: context "kubenet-070881" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-070881" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-070881" does not exist

>>> k8s: coredns logs:
error: context "kubenet-070881" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-070881" does not exist

>>> k8s: api server logs:
error: context "kubenet-070881" does not exist

>>> host: /etc/cni:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: ip a s:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: ip r s:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: iptables-save:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: iptables table nat:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-070881" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-070881" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-070881" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: kubelet daemon config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> k8s: kubelet logs:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18213-3295134/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 15 Mar 2024 07:40:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-509862
contexts:
- context:
    cluster: force-systemd-flag-509862
    extensions:
    - extension:
        last-update: Fri, 15 Mar 2024 07:40:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-509862
  name: force-systemd-flag-509862
current-context: force-systemd-flag-509862
kind: Config
preferences: {}
users:
- name: force-systemd-flag-509862
  user:
    client-certificate: /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/force-systemd-flag-509862/client.crt
    client-key: /home/jenkins/minikube-integration/18213-3295134/.minikube/profiles/force-systemd-flag-509862/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-070881

>>> host: docker daemon status:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: docker daemon config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: docker system info:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: cri-docker daemon status:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: cri-docker daemon config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: cri-dockerd version:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: containerd daemon status:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: containerd daemon config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: containerd config dump:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: crio daemon status:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: crio daemon config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: /etc/crio:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

>>> host: crio config:
* Profile "kubenet-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-070881"

----------------------- debugLogs end: kubenet-070881 [took: 5.079911783s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-070881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-070881
--- SKIP: TestNetworkPlugins/group/kubenet (5.29s)
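
Note: one detail worth pulling out of the dump above: the ">>> k8s: kubectl config" section prints whichever kubeconfig was active at the time, and its only context is force-systemd-flag-509862, a profile from a concurrently running test. The skipped kubenet test never ran minikube start, so no kubenet-070881 context was ever written, which is why every context lookup in the debug log fails. Reading the same fields programmatically with client-go (a sketch; the kubeconfig path is an assumption):

    // read_kubeconfig.go - sketch: load a kubeconfig and list its contexts
    // with k8s.io/client-go/tools/clientcmd.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // path is an assumption
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
        for name, ctx := range cfg.Contexts {
            fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
        }
    }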

TestNetworkPlugins/group/cilium (6.05s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-070881 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-070881

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-070881" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-070881" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-070881" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-070881" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-070881" does not exist

>>> k8s: coredns logs:
error: context "cilium-070881" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-070881" does not exist

>>> k8s: api server logs:
error: context "cilium-070881" does not exist

>>> host: /etc/cni:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: ip a s:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: ip r s:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: iptables-save:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: iptables table nat:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-070881

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-070881

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-070881" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-070881" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-070881

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-070881

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-070881" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-070881" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-070881" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-070881" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-070881" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: kubelet daemon config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> k8s: kubelet logs:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-070881

>>> host: docker daemon status:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: docker daemon config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: docker system info:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: cri-docker daemon status:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: cri-docker daemon config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: cri-dockerd version:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: containerd daemon status:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: containerd daemon config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: containerd config dump:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: crio daemon status:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: crio daemon config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: /etc/crio:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

>>> host: crio config:
* Profile "cilium-070881" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-070881"

----------------------- debugLogs end: cilium-070881 [took: 5.864599842s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-070881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-070881
--- SKIP: TestNetworkPlugins/group/cilium (6.05s)