Test Report: Docker_Linux_crio_arm64 18966

6c595620fab5adb75898ef5927d180f0ecb72463 : 2024-05-28 : 34666

Failed tests (3/328)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                             | 165.85       |
| 32    | TestAddons/parallel/MetricsServer                       | 366.32       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 372.69       |
|-------|---------------------------------------------------------|--------------|
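
To re-run one of these failures locally, the test can be selected by name with the standard Go test runner. This is a sketch only: it assumes the integration suite under test/integration in the minikube repository and a pre-built out/minikube-linux-arm64 binary as used in the logs below; the job-specific driver, container-runtime, and binary flags are omitted here.

	go test ./test/integration -v -run "TestAddons/parallel/Ingress"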
TestAddons/parallel/Ingress (165.85s)
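
This test exercises the ingress and ingress-dns addons: it deploys an nginx pod plus an Ingress for nginx.example.com, curls it from inside the node, then resolves a test hostname against the node IP. The failing steps can be replayed by hand with the commands captured in the log below (a sketch only; the profile name addons-504712 and node IP 192.168.49.2 are specific to this run):

	kubectl --context addons-504712 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-504712 replace --force -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-arm64 -p addons-504712 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-504712 replace --force -f testdata/ingress-dns-example-v1.yaml
	nslookup hello-john.test 192.168.49.2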

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-504712 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-504712 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-504712 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [567aa897-c410-44e1-800d-a99aad2c5c32] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [567aa897-c410-44e1-800d-a99aad2c5c32] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003717187s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-504712 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.961861344s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-504712 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.074377827s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 addons disable ingress-dns --alsologtostderr -v=1: (1.440554701s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 addons disable ingress --alsologtostderr -v=1: (7.745327424s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-504712
helpers_test.go:235: (dbg) docker inspect addons-504712:

-- stdout --
	[
	    {
	        "Id": "85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55",
	        "Created": "2024-05-28T21:31:29.068232415Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1356311,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T21:31:29.377870683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/hostname",
	        "HostsPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/hosts",
	        "LogPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55-json.log",
	        "Name": "/addons-504712",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-504712:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-504712",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1-init/diff:/var/lib/docker/overlay2/41cb90b313a958e97d6c40ed76425369b134e98a770fd8f601707592b588c01d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-504712",
	                "Source": "/var/lib/docker/volumes/addons-504712/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-504712",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-504712",
	                "name.minikube.sigs.k8s.io": "addons-504712",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "423b96190bfe57e45830045019e4a6e241934393bb3cf692800588c6d4a84066",
	            "SandboxKey": "/var/run/docker/netns/423b96190bfe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34299"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34297"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34296"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-504712": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fea6bb4deb3b35b9f5deef4dc891bc567bf387551481e264d8c46eac4277403b",
	                    "EndpointID": "a87268d110dd188941d37a81f6013d79545058df629ca07aac8b97c34dc6e299",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-504712",
	                        "85f22f353ce0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-504712 -n addons-504712
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 logs -n 25: (1.462896726s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-064906   | jenkins | v1.33.1 | 28 May 24 21:27 UTC |                     |
	|         | -p download-only-064906              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| delete  | -p download-only-064906              | download-only-064906   | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -o=json --download-only              | download-only-723265   | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | -p download-only-723265              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| delete  | -p download-only-723265              | download-only-723265   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| delete  | -p download-only-064906              | download-only-064906   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| delete  | -p download-only-723265              | download-only-723265   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | --download-only -p                   | download-docker-966905 | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | download-docker-966905               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-966905            | download-docker-966905 | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | --download-only -p                   | binary-mirror-950707   | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | binary-mirror-950707                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38569               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-950707              | binary-mirror-950707   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| addons  | enable dashboard -p                  | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | addons-504712                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | addons-504712                        |                        |         |         |                     |                     |
	| start   | -p addons-504712 --wait=true         | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	|         | -p addons-504712                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-504712 ip                     | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	| addons  | addons-504712 addons disable         | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-504712 addons                 | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-504712 addons                 | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | addons-504712                        |                        |         |         |                     |                     |
	| ssh     | addons-504712 ssh curl -s            | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-504712 ip                     | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:37 UTC | 28 May 24 21:37 UTC |
	| addons  | addons-504712 addons disable         | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-504712 addons disable         | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:31:04
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:31:04.624360 1355836 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:31:04.624794 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:04.624810 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:31:04.624816 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:04.625108 1355836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:31:04.625624 1355836 out.go:298] Setting JSON to false
	I0528 21:31:04.626534 1355836 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18813,"bootTime":1716913052,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:31:04.626608 1355836 start.go:139] virtualization:  
	I0528 21:31:04.631210 1355836 out.go:177] * [addons-504712] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:31:04.633562 1355836 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:31:04.633611 1355836 notify.go:220] Checking for updates...
	I0528 21:31:04.636077 1355836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:31:04.638572 1355836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:31:04.640672 1355836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:31:04.642899 1355836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:31:04.644998 1355836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:31:04.647252 1355836 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:31:04.667144 1355836 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:31:04.667259 1355836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:31:04.733765 1355836 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:31:04.723790769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:31:04.733876 1355836 docker.go:295] overlay module found
	I0528 21:31:04.736144 1355836 out.go:177] * Using the docker driver based on user configuration
	I0528 21:31:04.738188 1355836 start.go:297] selected driver: docker
	I0528 21:31:04.738203 1355836 start.go:901] validating driver "docker" against <nil>
	I0528 21:31:04.738215 1355836 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:31:04.738844 1355836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:31:04.791323 1355836 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:31:04.782648082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:31:04.791493 1355836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:31:04.791719 1355836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:31:04.793910 1355836 out.go:177] * Using Docker driver with root privileges
	I0528 21:31:04.795590 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:04.795618 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:04.795631 1355836 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 21:31:04.795713 1355836 start.go:340] cluster config:
	{Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:31:04.799475 1355836 out.go:177] * Starting "addons-504712" primary control-plane node in "addons-504712" cluster
	I0528 21:31:04.801179 1355836 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 21:31:04.803182 1355836 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 21:31:04.805091 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:04.805147 1355836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 21:31:04.805157 1355836 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0528 21:31:04.805252 1355836 cache.go:56] Caching tarball of preloaded images
	I0528 21:31:04.805343 1355836 preload.go:173] Found /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0528 21:31:04.805357 1355836 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:31:04.805728 1355836 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json ...
	I0528 21:31:04.805755 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json: {Name:mk5719af5c4179174a3b9bff9067a58daf99fa48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:04.819950 1355836 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 21:31:04.820054 1355836 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 21:31:04.820072 1355836 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory, skipping pull
	I0528 21:31:04.820077 1355836 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in cache, skipping pull
	I0528 21:31:04.820083 1355836 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 21:31:04.820089 1355836 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from local cache
	I0528 21:31:21.353851 1355836 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from cached tarball
	I0528 21:31:21.353911 1355836 cache.go:194] Successfully downloaded all kic artifacts
	I0528 21:31:21.353940 1355836 start.go:360] acquireMachinesLock for addons-504712: {Name:mk8939d43682ac81e4cc316266fc1208eccf5792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:31:21.354138 1355836 start.go:364] duration metric: took 180.122µs to acquireMachinesLock for "addons-504712"
	I0528 21:31:21.354171 1355836 start.go:93] Provisioning new machine with config: &{Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:31:21.354247 1355836 start.go:125] createHost starting for "" (driver="docker")
	I0528 21:31:21.357355 1355836 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0528 21:31:21.357617 1355836 start.go:159] libmachine.API.Create for "addons-504712" (driver="docker")
	I0528 21:31:21.357663 1355836 client.go:168] LocalClient.Create starting
	I0528 21:31:21.357787 1355836 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem
	I0528 21:31:21.802430 1355836 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem
	I0528 21:31:22.683183 1355836 cli_runner.go:164] Run: docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0528 21:31:22.697789 1355836 cli_runner.go:211] docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0528 21:31:22.697874 1355836 network_create.go:281] running [docker network inspect addons-504712] to gather additional debugging logs...
	I0528 21:31:22.697895 1355836 cli_runner.go:164] Run: docker network inspect addons-504712
	W0528 21:31:22.713425 1355836 cli_runner.go:211] docker network inspect addons-504712 returned with exit code 1
	I0528 21:31:22.713454 1355836 network_create.go:284] error running [docker network inspect addons-504712]: docker network inspect addons-504712: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-504712 not found
	I0528 21:31:22.713467 1355836 network_create.go:286] output of [docker network inspect addons-504712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-504712 not found
	
	** /stderr **
	I0528 21:31:22.713564 1355836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 21:31:22.729001 1355836 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006d13c0}
	I0528 21:31:22.729044 1355836 network_create.go:124] attempt to create docker network addons-504712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0528 21:31:22.729101 1355836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-504712 addons-504712
	I0528 21:31:22.786258 1355836 network_create.go:108] docker network addons-504712 192.168.49.0/24 created
	I0528 21:31:22.786289 1355836 kic.go:121] calculated static IP "192.168.49.2" for the "addons-504712" container
	I0528 21:31:22.786359 1355836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0528 21:31:22.800230 1355836 cli_runner.go:164] Run: docker volume create addons-504712 --label name.minikube.sigs.k8s.io=addons-504712 --label created_by.minikube.sigs.k8s.io=true
	I0528 21:31:22.816487 1355836 oci.go:103] Successfully created a docker volume addons-504712
	I0528 21:31:22.816577 1355836 cli_runner.go:164] Run: docker run --rm --name addons-504712-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --entrypoint /usr/bin/test -v addons-504712:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib
	I0528 21:31:24.884865 1355836 cli_runner.go:217] Completed: docker run --rm --name addons-504712-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --entrypoint /usr/bin/test -v addons-504712:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib: (2.068237618s)
	I0528 21:31:24.884895 1355836 oci.go:107] Successfully prepared a docker volume addons-504712
	I0528 21:31:24.884928 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:24.884949 1355836 kic.go:194] Starting extracting preloaded images to volume ...
	I0528 21:31:24.885031 1355836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir
	I0528 21:31:29.005516 1355836 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir: (4.120441729s)
	I0528 21:31:29.005565 1355836 kic.go:203] duration metric: took 4.120598509s to extract preloaded images to volume ...
	W0528 21:31:29.005723 1355836 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0528 21:31:29.005846 1355836 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0528 21:31:29.053901 1355836 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-504712 --name addons-504712 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-504712 --network addons-504712 --ip 192.168.49.2 --volume addons-504712:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862
	I0528 21:31:29.389481 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Running}}
	I0528 21:31:29.417478 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:29.439926 1355836 cli_runner.go:164] Run: docker exec addons-504712 stat /var/lib/dpkg/alternatives/iptables
	I0528 21:31:29.503537 1355836 oci.go:144] the created container "addons-504712" has a running status.
	I0528 21:31:29.503565 1355836 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa...
	I0528 21:31:29.860846 1355836 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0528 21:31:29.889501 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:29.926184 1355836 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0528 21:31:29.926204 1355836 kic_runner.go:114] Args: [docker exec --privileged addons-504712 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0528 21:31:30.008673 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:30.032311 1355836 machine.go:94] provisionDockerMachine start ...
	I0528 21:31:30.032431 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.055272 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.055571 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.055590 1355836 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:31:30.222097 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504712
	
	I0528 21:31:30.222124 1355836 ubuntu.go:169] provisioning hostname "addons-504712"
	I0528 21:31:30.222231 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.244355 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.244601 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.244619 1355836 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-504712 && echo "addons-504712" | sudo tee /etc/hostname
	I0528 21:31:30.387661 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504712
	
	I0528 21:31:30.387743 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.414178 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.414438 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.414461 1355836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-504712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-504712/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-504712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:31:30.538155 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:31:30.538181 1355836 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1349783/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1349783/.minikube}
	I0528 21:31:30.538201 1355836 ubuntu.go:177] setting up certificates
	I0528 21:31:30.538213 1355836 provision.go:84] configureAuth start
	I0528 21:31:30.538278 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:30.556984 1355836 provision.go:143] copyHostCerts
	I0528 21:31:30.557084 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem (1082 bytes)
	I0528 21:31:30.557222 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem (1123 bytes)
	I0528 21:31:30.557298 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem (1675 bytes)
	I0528 21:31:30.557361 1355836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem org=jenkins.addons-504712 san=[127.0.0.1 192.168.49.2 addons-504712 localhost minikube]
	I0528 21:31:30.760799 1355836 provision.go:177] copyRemoteCerts
	I0528 21:31:30.760895 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:31:30.760947 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.778218 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:30.866885 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 21:31:30.890485 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 21:31:30.913436 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:31:30.937074 1355836 provision.go:87] duration metric: took 398.847464ms to configureAuth
	I0528 21:31:30.937099 1355836 ubuntu.go:193] setting minikube options for container-runtime
	I0528 21:31:30.937289 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:31:30.937400 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.954145 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.954385 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.954399 1355836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:31:31.179523 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:31:31.179549 1355836 machine.go:97] duration metric: took 1.147212839s to provisionDockerMachine
	I0528 21:31:31.179561 1355836 client.go:171] duration metric: took 9.821888429s to LocalClient.Create
	I0528 21:31:31.179575 1355836 start.go:167] duration metric: took 9.821959017s to libmachine.API.Create "addons-504712"
	I0528 21:31:31.179583 1355836 start.go:293] postStartSetup for "addons-504712" (driver="docker")
	I0528 21:31:31.179599 1355836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:31:31.179667 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:31:31.179721 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.196693 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.290895 1355836 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:31:31.293870 1355836 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 21:31:31.293909 1355836 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 21:31:31.293920 1355836 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 21:31:31.293927 1355836 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 21:31:31.293937 1355836 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/addons for local assets ...
	I0528 21:31:31.294013 1355836 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/files for local assets ...
	I0528 21:31:31.294067 1355836 start.go:296] duration metric: took 114.473166ms for postStartSetup
	I0528 21:31:31.294381 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:31.311810 1355836 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json ...
	I0528 21:31:31.312101 1355836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:31:31.312153 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.329124 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.415086 1355836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 21:31:31.420029 1355836 start.go:128] duration metric: took 10.065765363s to createHost
	I0528 21:31:31.420054 1355836 start.go:83] releasing machines lock for "addons-504712", held for 10.065900941s
	I0528 21:31:31.420134 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:31.436280 1355836 ssh_runner.go:195] Run: cat /version.json
	I0528 21:31:31.436361 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.436630 1355836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:31:31.436691 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.454910 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.466538 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.545515 1355836 ssh_runner.go:195] Run: systemctl --version
	I0528 21:31:31.670589 1355836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:31:31.808978 1355836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 21:31:31.812905 1355836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:31:31.832336 1355836 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0528 21:31:31.832412 1355836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:31:31.864776 1355836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0528 21:31:31.864803 1355836 start.go:494] detecting cgroup driver to use...
	I0528 21:31:31.864837 1355836 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 21:31:31.864889 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:31:31.881256 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:31:31.891873 1355836 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:31:31.891934 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:31:31.905610 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:31:31.920099 1355836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:31:32.012711 1355836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:31:32.109248 1355836 docker.go:233] disabling docker service ...
	I0528 21:31:32.109325 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:31:32.129383 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:31:32.141824 1355836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:31:32.235118 1355836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:31:32.328467 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:31:32.340976 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:31:32.358434 1355836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:31:32.358527 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.368988 1355836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:31:32.369105 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.379900 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.389352 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.399403 1355836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:31:32.408194 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.417688 1355836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.432977 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.442766 1355836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:31:32.451589 1355836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:31:32.460415 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:32.542937 1355836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:31:32.646817 1355836 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:31:32.646951 1355836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:31:32.650442 1355836 start.go:562] Will wait 60s for crictl version
	I0528 21:31:32.650541 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:31:32.653850 1355836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:31:32.692108 1355836 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0528 21:31:32.692261 1355836 ssh_runner.go:195] Run: crio --version
	I0528 21:31:32.729043 1355836 ssh_runner.go:195] Run: crio --version
	I0528 21:31:32.772142 1355836 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0528 21:31:32.774138 1355836 cli_runner.go:164] Run: docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 21:31:32.788708 1355836 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0528 21:31:32.792224 1355836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:32.802778 1355836 kubeadm.go:877] updating cluster {Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:31:32.802896 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:32.802960 1355836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:32.874662 1355836 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:31:32.874685 1355836 crio.go:433] Images already preloaded, skipping extraction
	I0528 21:31:32.874741 1355836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:32.916262 1355836 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:31:32.916283 1355836 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:31:32.916292 1355836 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 crio true true} ...
	I0528 21:31:32.916394 1355836 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-504712 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:31:32.916481 1355836 ssh_runner.go:195] Run: crio config
	I0528 21:31:32.967208 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:32.967271 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:32.967297 1355836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:31:32.967322 1355836 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-504712 NodeName:addons-504712 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:31:32.967466 1355836 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-504712"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:31:32.967540 1355836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:31:32.976306 1355836 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:31:32.976377 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:31:32.984814 1355836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0528 21:31:33.005491 1355836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:31:33.025315 1355836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0528 21:31:33.043351 1355836 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0528 21:31:33.046765 1355836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:33.057474 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:33.139311 1355836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:31:33.153488 1355836 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712 for IP: 192.168.49.2
	I0528 21:31:33.153507 1355836 certs.go:194] generating shared ca certs ...
	I0528 21:31:33.153524 1355836 certs.go:226] acquiring lock for ca certs: {Name:mk3b01431a293453662fa80a6161920f23c6c736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.154117 1355836 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key
	I0528 21:31:33.631690 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt ...
	I0528 21:31:33.631722 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt: {Name:mkc01af482e04252e8a6c75b788228b3ac6e96f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.631920 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key ...
	I0528 21:31:33.631938 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key: {Name:mk2f271def928866fcca6ed23a4f3348d3f75bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.632034 1355836 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key
	I0528 21:31:34.346824 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt ...
	I0528 21:31:34.346859 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt: {Name:mk77a18aabf743cb34ab7b26a8e82ac7fae4a46f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.347570 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key ...
	I0528 21:31:34.347586 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key: {Name:mkca254a7fa65d1c5bd938defe82ebb6eb5a889c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.348129 1355836 certs.go:256] generating profile certs ...
	I0528 21:31:34.348197 1355836 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key
	I0528 21:31:34.348216 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt with IP's: []
	I0528 21:31:34.995925 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt ...
	I0528 21:31:34.995955 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: {Name:mk6b9e61238b6af500ff68f79693da49d282f1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.996154 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key ...
	I0528 21:31:34.996167 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key: {Name:mk3a85eda1c7d1272e9add9aa5b7e3909a551fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.996252 1355836 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4
	I0528 21:31:34.996273 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0528 21:31:35.791823 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 ...
	I0528 21:31:35.791859 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4: {Name:mk365cd1516347d909b57c47c42d612209b5e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:35.792447 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4 ...
	I0528 21:31:35.792467 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4: {Name:mkaa8f3881f76cafcd1dec37848671f41f0b9728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:35.792615 1355836 certs.go:381] copying /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt
	I0528 21:31:35.792703 1355836 certs.go:385] copying /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key
	I0528 21:31:35.792763 1355836 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key
	I0528 21:31:35.792784 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt with IP's: []
	I0528 21:31:36.191454 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt ...
	I0528 21:31:36.191486 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt: {Name:mk30a886ec00f9560d299af22afab40b8fde72e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:36.192103 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key ...
	I0528 21:31:36.192122 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key: {Name:mka8f3fad8ffa49d497c8786b8a3e7dfbf7d378f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:36.192324 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem (1679 bytes)
	I0528 21:31:36.192368 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem (1082 bytes)
	I0528 21:31:36.192397 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:31:36.192427 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem (1675 bytes)
	I0528 21:31:36.193069 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:31:36.218556 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:31:36.243527 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:31:36.271607 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:31:36.298769 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 21:31:36.324045 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 21:31:36.348902 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:31:36.373028 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:31:36.397059 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:31:36.420915 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:31:36.439109 1355836 ssh_runner.go:195] Run: openssl version
	I0528 21:31:36.444411 1355836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:31:36.453758 1355836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.457248 1355836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.457347 1355836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.464264 1355836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:31:36.473977 1355836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:31:36.477551 1355836 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 21:31:36.477648 1355836 kubeadm.go:391] StartCluster: {Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:31:36.477750 1355836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:31:36.477812 1355836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:31:36.515349 1355836 cri.go:89] found id: ""
	I0528 21:31:36.515417 1355836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 21:31:36.524318 1355836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:31:36.533088 1355836 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0528 21:31:36.533208 1355836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:31:36.542117 1355836 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:31:36.542140 1355836 kubeadm.go:156] found existing configuration files:
	
	I0528 21:31:36.542212 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:31:36.550881 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:31:36.550993 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:31:36.559608 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:31:36.568227 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:31:36.568337 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:31:36.576733 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:31:36.585440 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:31:36.585533 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:31:36.594209 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:31:36.602829 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:31:36.602951 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:31:36.611536 1355836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0528 21:31:36.657742 1355836 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 21:31:36.657887 1355836 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:31:36.697841 1355836 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0528 21:31:36.697957 1355836 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1062-aws
	I0528 21:31:36.698015 1355836 kubeadm.go:309] OS: Linux
	I0528 21:31:36.698099 1355836 kubeadm.go:309] CGROUPS_CPU: enabled
	I0528 21:31:36.698161 1355836 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0528 21:31:36.698232 1355836 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0528 21:31:36.698294 1355836 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0528 21:31:36.698362 1355836 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0528 21:31:36.698430 1355836 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0528 21:31:36.698497 1355836 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0528 21:31:36.698560 1355836 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0528 21:31:36.698619 1355836 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0528 21:31:36.761080 1355836 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:31:36.761291 1355836 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:31:36.761438 1355836 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:31:36.979169 1355836 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:31:36.984387 1355836 out.go:204]   - Generating certificates and keys ...
	I0528 21:31:36.984583 1355836 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:31:36.984698 1355836 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:31:37.779367 1355836 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 21:31:38.070410 1355836 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 21:31:38.823189 1355836 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 21:31:39.578654 1355836 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 21:31:39.829168 1355836 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 21:31:39.829483 1355836 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-504712 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 21:31:40.375507 1355836 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 21:31:40.375853 1355836 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-504712 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 21:31:41.102455 1355836 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 21:31:41.324553 1355836 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 21:31:42.529890 1355836 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 21:31:42.530355 1355836 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:31:43.674110 1355836 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:31:44.104765 1355836 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 21:31:44.597127 1355836 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:31:44.823223 1355836 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:31:46.054941 1355836 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:31:46.055624 1355836 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:31:46.060414 1355836 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:31:46.062828 1355836 out.go:204]   - Booting up control plane ...
	I0528 21:31:46.062928 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:31:46.063003 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:31:46.063074 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:31:46.073402 1355836 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:31:46.074394 1355836 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:31:46.074634 1355836 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:31:46.165083 1355836 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 21:31:46.165169 1355836 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 21:31:47.165950 1355836 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000859149s
	I0528 21:31:47.166060 1355836 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 21:31:53.167864 1355836 kubeadm.go:309] [api-check] The API server is healthy after 6.001986229s
	I0528 21:31:53.187019 1355836 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 21:31:53.199104 1355836 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 21:31:53.227782 1355836 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 21:31:53.227976 1355836 kubeadm.go:309] [mark-control-plane] Marking the node addons-504712 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 21:31:53.238637 1355836 kubeadm.go:309] [bootstrap-token] Using token: nlgfel.bhcj8g7dheyimwds
	I0528 21:31:53.240691 1355836 out.go:204]   - Configuring RBAC rules ...
	I0528 21:31:53.240847 1355836 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 21:31:53.245362 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 21:31:53.252933 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 21:31:53.256250 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 21:31:53.259911 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 21:31:53.265937 1355836 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 21:31:53.574848 1355836 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 21:31:54.024644 1355836 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 21:31:54.575594 1355836 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 21:31:54.577029 1355836 kubeadm.go:309] 
	I0528 21:31:54.577116 1355836 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 21:31:54.577128 1355836 kubeadm.go:309] 
	I0528 21:31:54.577218 1355836 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 21:31:54.577229 1355836 kubeadm.go:309] 
	I0528 21:31:54.577254 1355836 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 21:31:54.577318 1355836 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 21:31:54.577392 1355836 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 21:31:54.577400 1355836 kubeadm.go:309] 
	I0528 21:31:54.577469 1355836 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 21:31:54.577479 1355836 kubeadm.go:309] 
	I0528 21:31:54.577541 1355836 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 21:31:54.577548 1355836 kubeadm.go:309] 
	I0528 21:31:54.577627 1355836 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 21:31:54.577711 1355836 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 21:31:54.577794 1355836 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 21:31:54.577806 1355836 kubeadm.go:309] 
	I0528 21:31:54.577896 1355836 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 21:31:54.577972 1355836 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 21:31:54.577979 1355836 kubeadm.go:309] 
	I0528 21:31:54.578086 1355836 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nlgfel.bhcj8g7dheyimwds \
	I0528 21:31:54.578208 1355836 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dff89f9d96955ea12e5a34678503b154cb1ba84632124852cf6ec75aeb79db1c \
	I0528 21:31:54.578235 1355836 kubeadm.go:309] 	--control-plane 
	I0528 21:31:54.578243 1355836 kubeadm.go:309] 
	I0528 21:31:54.578327 1355836 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 21:31:54.578340 1355836 kubeadm.go:309] 
	I0528 21:31:54.578432 1355836 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nlgfel.bhcj8g7dheyimwds \
	I0528 21:31:54.578558 1355836 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dff89f9d96955ea12e5a34678503b154cb1ba84632124852cf6ec75aeb79db1c 
	I0528 21:31:54.582422 1355836 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-aws\n", err: exit status 1
	I0528 21:31:54.582552 1355836 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:31:54.582575 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:54.582583 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:54.586664 1355836 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 21:31:54.588997 1355836 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 21:31:54.593686 1355836 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 21:31:54.593705 1355836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 21:31:54.612374 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 21:31:54.877216 1355836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:31:54.877289 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:54.877415 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-504712 minikube.k8s.io/updated_at=2024_05_28T21_31_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-504712 minikube.k8s.io/primary=true
	I0528 21:31:55.022813 1355836 ops.go:34] apiserver oom_adj: -16
	I0528 21:31:55.022925 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:55.523579 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:56.023735 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:56.523147 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:57.023068 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:57.523275 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:58.023474 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:58.524041 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:59.023246 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:59.523564 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:00.042514 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:00.523641 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:01.024026 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:01.523947 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:02.023013 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:02.523833 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:03.023667 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:03.523985 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:04.023000 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:04.523753 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:05.023563 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:05.523856 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:06.024051 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:06.523072 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:07.023021 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:07.523007 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:08.023608 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:08.134918 1355836 kubeadm.go:1107] duration metric: took 13.257688459s to wait for elevateKubeSystemPrivileges
	W0528 21:32:08.134948 1355836 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 21:32:08.134956 1355836 kubeadm.go:393] duration metric: took 31.657312362s to StartCluster
	I0528 21:32:08.134971 1355836 settings.go:142] acquiring lock: {Name:mk3ead4661b05edfaa64061283a93c6a76969cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:08.135569 1355836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:32:08.136033 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:08.136234 1355836 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:32:08.138377 1355836 out.go:177] * Verifying Kubernetes components...
	I0528 21:32:08.136322 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 21:32:08.136483 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:08.136491 1355836 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0528 21:32:08.140504 1355836 addons.go:69] Setting yakd=true in profile "addons-504712"
	I0528 21:32:08.140524 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:08.140537 1355836 addons.go:234] Setting addon yakd=true in "addons-504712"
	I0528 21:32:08.140569 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.140630 1355836 addons.go:69] Setting ingress-dns=true in profile "addons-504712"
	I0528 21:32:08.140672 1355836 addons.go:234] Setting addon ingress-dns=true in "addons-504712"
	I0528 21:32:08.140705 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.141048 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.141140 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.141630 1355836 addons.go:69] Setting inspektor-gadget=true in profile "addons-504712"
	I0528 21:32:08.141660 1355836 addons.go:234] Setting addon inspektor-gadget=true in "addons-504712"
	I0528 21:32:08.141685 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.141745 1355836 addons.go:69] Setting cloud-spanner=true in profile "addons-504712"
	I0528 21:32:08.141767 1355836 addons.go:234] Setting addon cloud-spanner=true in "addons-504712"
	I0528 21:32:08.141787 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.142172 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.142176 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.145662 1355836 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-504712"
	I0528 21:32:08.145737 1355836 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-504712"
	I0528 21:32:08.145770 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.146394 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.152142 1355836 addons.go:69] Setting metrics-server=true in profile "addons-504712"
	I0528 21:32:08.152292 1355836 addons.go:234] Setting addon metrics-server=true in "addons-504712"
	I0528 21:32:08.152361 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.152900 1355836 addons.go:69] Setting default-storageclass=true in profile "addons-504712"
	I0528 21:32:08.152967 1355836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-504712"
	I0528 21:32:08.153234 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.156396 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.187382 1355836 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-504712"
	I0528 21:32:08.187490 1355836 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-504712"
	I0528 21:32:08.187567 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.187705 1355836 addons.go:69] Setting gcp-auth=true in profile "addons-504712"
	I0528 21:32:08.187884 1355836 addons.go:69] Setting registry=true in profile "addons-504712"
	I0528 21:32:08.187905 1355836 addons.go:234] Setting addon registry=true in "addons-504712"
	I0528 21:32:08.187928 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.188344 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.188503 1355836 mustload.go:65] Loading cluster: addons-504712
	I0528 21:32:08.188702 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:08.189081 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.212716 1355836 addons.go:69] Setting storage-provisioner=true in profile "addons-504712"
	I0528 21:32:08.212763 1355836 addons.go:234] Setting addon storage-provisioner=true in "addons-504712"
	I0528 21:32:08.212805 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.213238 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.215546 1355836 addons.go:69] Setting ingress=true in profile "addons-504712"
	I0528 21:32:08.215628 1355836 addons.go:234] Setting addon ingress=true in "addons-504712"
	I0528 21:32:08.215701 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.216155 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.226692 1355836 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-504712"
	I0528 21:32:08.226801 1355836 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-504712"
	I0528 21:32:08.227423 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.235485 1355836 addons.go:234] Setting addon default-storageclass=true in "addons-504712"
	I0528 21:32:08.235526 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.235924 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.236234 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.267689 1355836 addons.go:69] Setting volcano=true in profile "addons-504712"
	I0528 21:32:08.267744 1355836 addons.go:234] Setting addon volcano=true in "addons-504712"
	I0528 21:32:08.267784 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.268198 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.289691 1355836 addons.go:69] Setting volumesnapshots=true in profile "addons-504712"
	I0528 21:32:08.289746 1355836 addons.go:234] Setting addon volumesnapshots=true in "addons-504712"
	I0528 21:32:08.289788 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.290261 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.296071 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 21:32:08.312662 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 21:32:08.321858 1355836 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 21:32:08.326714 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 21:32:08.326783 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 21:32:08.326882 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.326574 1355836 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 21:32:08.326580 1355836 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 21:32:08.326585 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 21:32:08.326589 1355836 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 21:32:08.326643 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.372100 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 21:32:08.376954 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:08.379490 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:08.380522 1355836 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 21:32:08.389807 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 21:32:08.386622 1355836 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 21:32:08.380531 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0528 21:32:08.391432 1355836 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0528 21:32:08.393209 1355836 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-504712"
	I0528 21:32:08.394264 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 21:32:08.394276 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 21:32:08.396959 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397086 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:32:08.404566 1355836 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:32:08.404589 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 21:32:08.404658 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397277 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 21:32:08.411808 1355836 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 21:32:08.411880 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397285 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 21:32:08.397374 1355836 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 21:32:08.397413 1355836 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 21:32:08.397456 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.397466 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 21:32:08.419020 1355836 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 21:32:08.419212 1355836 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 21:32:08.419229 1355836 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 21:32:08.419235 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 21:32:08.419239 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 21:32:08.426211 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.426396 1355836 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 21:32:08.426443 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.429773 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.438968 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 21:32:08.436398 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.436433 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.441514 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.467073 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 21:32:08.469826 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 21:32:08.473159 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 21:32:08.478451 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 21:32:08.480936 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 21:32:08.480963 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 21:32:08.481033 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.517310 1355836 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 21:32:08.517332 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 21:32:08.517399 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.552447 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.578769 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 21:32:08.585652 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 21:32:08.585688 1355836 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 21:32:08.585764 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.592179 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.592796 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.600482 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.600820 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.645983 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.674549 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.690388 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.702148 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.716348 1355836 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 21:32:08.713576 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.714884 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.720884 1355836 out.go:177]   - Using image docker.io/busybox:stable
	I0528 21:32:08.723196 1355836 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 21:32:08.723215 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 21:32:08.723280 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.732062 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.752002 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.890961 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 21:32:08.891036 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 21:32:08.943611 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:32:08.947527 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 21:32:08.992656 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 21:32:08.992717 1355836 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 21:32:09.006228 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 21:32:09.051341 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 21:32:09.051412 1355836 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 21:32:09.107775 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 21:32:09.107846 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 21:32:09.131110 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 21:32:09.131188 1355836 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 21:32:09.144953 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 21:32:09.150332 1355836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.008685325s)
	I0528 21:32:09.150578 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 21:32:09.150829 1355836 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.010287934s)
	I0528 21:32:09.150918 1355836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:32:09.155569 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 21:32:09.155647 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 21:32:09.218307 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 21:32:09.218383 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 21:32:09.232545 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 21:32:09.232618 1355836 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 21:32:09.236346 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:32:09.243878 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 21:32:09.243949 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 21:32:09.266615 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 21:32:09.266685 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 21:32:09.274215 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 21:32:09.301825 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 21:32:09.314657 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 21:32:09.314717 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 21:32:09.340791 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 21:32:09.340866 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 21:32:09.439168 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 21:32:09.439190 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 21:32:09.484833 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 21:32:09.484853 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0528 21:32:09.496990 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 21:32:09.497019 1355836 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 21:32:09.528527 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 21:32:09.531963 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 21:32:09.532040 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 21:32:09.605260 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 21:32:09.605331 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 21:32:09.636231 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 21:32:09.636311 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 21:32:09.723955 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 21:32:09.730267 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 21:32:09.730337 1355836 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 21:32:09.748541 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 21:32:09.748617 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 21:32:09.810719 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 21:32:09.810789 1355836 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 21:32:09.843781 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 21:32:09.843854 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 21:32:09.948072 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 21:32:09.969720 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 21:32:09.969791 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 21:32:09.974277 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 21:32:09.974345 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 21:32:09.977545 1355836 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:09.977640 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 21:32:10.086243 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 21:32:10.086334 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 21:32:10.089627 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:10.137647 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 21:32:10.137711 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 21:32:10.211945 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 21:32:10.212015 1355836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 21:32:10.278145 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 21:32:10.319675 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 21:32:10.319747 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 21:32:10.422775 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 21:32:10.422849 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 21:32:10.483220 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 21:32:10.483290 1355836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 21:32:10.503317 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 21:32:13.749217 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.805504042s)
	I0528 21:32:13.749282 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.801681082s)
	I0528 21:32:13.749315 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.743018111s)
	I0528 21:32:14.774198 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.629159516s)
	I0528 21:32:14.774244 1355836 addons.go:475] Verifying addon ingress=true in "addons-504712"
	I0528 21:32:14.776600 1355836 out.go:177] * Verifying ingress addon...
	I0528 21:32:14.774470 1355836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.623856224s)
	I0528 21:32:14.774487 1355836 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.62354689s)
	I0528 21:32:14.774517 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.538112697s)
	I0528 21:32:14.774536 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.500262016s)
	I0528 21:32:14.774574 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.472684045s)
	I0528 21:32:14.774604 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.246009632s)
	I0528 21:32:14.774630 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.050607423s)
	I0528 21:32:14.774708 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.826555614s)
	I0528 21:32:14.778562 1355836 addons.go:475] Verifying addon metrics-server=true in "addons-504712"
	I0528 21:32:14.779414 1355836 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 21:32:14.779591 1355836 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0528 21:32:14.780939 1355836 node_ready.go:35] waiting up to 6m0s for node "addons-504712" to be "Ready" ...
	I0528 21:32:14.783216 1355836 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-504712 service yakd-dashboard -n yakd-dashboard
	
	I0528 21:32:14.781345 1355836 addons.go:475] Verifying addon registry=true in "addons-504712"
	I0528 21:32:14.785681 1355836 out.go:177] * Verifying registry addon...
	I0528 21:32:14.788917 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 21:32:14.794327 1355836 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 21:32:14.794355 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:14.823944 1355836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 21:32:14.823966 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0528 21:32:14.824124 1355836 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0528 21:32:14.876326 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.786597968s)
	W0528 21:32:14.876380 1355836 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 21:32:14.876400 1355836 retry.go:31] will retry after 363.490839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 21:32:14.876475 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.598257267s)
	I0528 21:32:15.125247 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.621820192s)
	I0528 21:32:15.125289 1355836 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-504712"
	I0528 21:32:15.128289 1355836 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 21:32:15.131579 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 21:32:15.145505 1355836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 21:32:15.145542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:15.240685 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:15.284266 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:15.285190 1355836 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-504712" context rescaled to 1 replicas
	I0528 21:32:15.293510 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:15.636651 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:15.785414 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:15.793579 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:15.850284 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 21:32:15.850367 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:15.865715 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:15.968508 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 21:32:15.988453 1355836 addons.go:234] Setting addon gcp-auth=true in "addons-504712"
	I0528 21:32:15.988505 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:15.988944 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:16.009784 1355836 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 21:32:16.009841 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:16.029793 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:16.136197 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:16.283728 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:16.293026 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:16.637597 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:16.784526 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:16.787473 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:16.793198 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.136371 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:17.284722 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:17.304643 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.655401 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:17.819102 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.820596 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.093319 1355836 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.083503029s)
	I0528 21:32:18.095807 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:18.093572 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.852847101s)
	I0528 21:32:18.097665 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 21:32:18.099303 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 21:32:18.099323 1355836 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 21:32:18.136594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:18.155878 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 21:32:18.155906 1355836 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 21:32:18.182212 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 21:32:18.182233 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 21:32:18.212564 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 21:32:18.283672 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.302234 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:18.635679 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:18.793788 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.794234 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:18.804984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:18.925315 1355836 addons.go:475] Verifying addon gcp-auth=true in "addons-504712"
	I0528 21:32:18.927988 1355836 out.go:177] * Verifying gcp-auth addon...
	I0528 21:32:18.931332 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 21:32:18.937813 1355836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 21:32:18.937833 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:19.136007 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:19.287381 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:19.292955 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:19.436756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:19.636093 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:19.783410 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:19.793192 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:19.937436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:20.138337 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:20.284850 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:20.293224 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:20.435140 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:20.635845 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:20.791712 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:20.799219 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:20.935243 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:21.136135 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:21.284298 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:21.284714 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:21.292939 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:21.435558 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:21.638755 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:21.784447 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:21.792810 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:21.935297 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:22.138113 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:22.287006 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:22.293292 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:22.435366 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:22.636105 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:22.783747 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:22.793542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:22.935578 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:23.136636 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:23.284519 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:23.285513 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:23.293525 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:23.435589 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:23.635759 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:23.784367 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:23.792734 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:23.935172 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:24.136920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:24.283593 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:24.293082 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:24.436476 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:24.637023 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:24.786319 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:24.793318 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:24.935728 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:25.139711 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:25.284902 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:25.285395 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:25.292499 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:25.435691 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:25.636474 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:25.783514 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:25.793804 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:25.935157 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:26.136510 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:26.283545 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:26.292909 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:26.435000 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:26.635724 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:26.785811 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:26.792916 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:26.934909 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:27.135504 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:27.284299 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:27.294333 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:27.435492 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:27.635906 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:27.783468 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:27.785261 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:27.792843 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:27.935049 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:28.136225 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:28.283359 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:28.292763 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:28.435251 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:28.636393 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:28.783911 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:28.793793 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:28.934750 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:29.136102 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:29.283804 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:29.292957 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:29.435413 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:29.636278 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:29.784085 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:29.785964 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:29.793205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:29.935179 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:30.137226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:30.284541 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:30.293743 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:30.435317 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:30.636756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:30.783862 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:30.793241 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:30.935146 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:31.136043 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:31.283590 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:31.293074 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:31.435050 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:31.635984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:31.784669 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:31.792844 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:31.935276 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:32.136465 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:32.283539 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:32.284797 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:32.294005 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:32.435071 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:32.637705 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:32.784257 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:32.793600 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:32.937064 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:33.135803 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:33.285354 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:33.294148 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:33.435217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:33.635851 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:33.785811 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:33.792704 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:33.935122 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:34.136533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:34.283481 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:34.285180 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:34.292987 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:34.435061 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:34.636276 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:34.783778 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:34.793177 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:34.935366 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:35.137225 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:35.283127 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:35.293028 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:35.435362 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:35.636378 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:35.784186 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:35.792865 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:35.934811 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:36.135963 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:36.284939 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:36.285486 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:36.292667 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:36.434783 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:36.635784 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:36.784751 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:36.792757 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:36.934570 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:37.136592 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:37.284893 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:37.293378 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:37.435190 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:37.641465 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:37.786198 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:37.792891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:37.935019 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:38.135663 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:38.283170 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:38.292816 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:38.434778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:38.635992 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:38.783491 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:38.785056 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:38.793136 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:38.935675 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:39.135881 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:39.283877 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:39.292897 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:39.435113 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:39.638407 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:39.783984 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:39.792731 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:39.936982 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:40.137497 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:40.285144 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:40.295044 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:40.437787 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:40.636728 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:40.783677 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:40.792523 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:40.934618 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:41.145328 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:41.284389 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:41.284816 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:41.293260 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:41.435763 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:41.636312 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:41.783809 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:41.792868 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:41.934866 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:42.145534 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:42.284629 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:42.294189 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:42.435051 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:42.636278 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:42.784144 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:42.793388 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:42.935376 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:43.137529 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:43.283978 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:43.284449 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:43.293319 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:43.435355 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:43.636691 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:43.805441 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:43.805978 1355836 node_ready.go:49] node "addons-504712" has status "Ready":"True"
	I0528 21:32:43.806000 1355836 node_ready.go:38] duration metric: took 29.025038928s for node "addons-504712" to be "Ready" ...
	I0528 21:32:43.806009 1355836 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:43.810514 1355836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 21:32:43.810544 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:43.837519 1355836 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:43.945879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:44.160134 1355836 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 21:32:44.160163 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:44.407117 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:44.412496 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:44.459790 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:44.645908 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:44.787656 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:44.795256 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:44.936254 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:45.151954 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:45.286448 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:45.296321 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:45.348718 1355836 pod_ready.go:92] pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.348746 1355836 pod_ready.go:81] duration metric: took 1.511192536s for pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.348766 1355836 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.357636 1355836 pod_ready.go:92] pod "etcd-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.357663 1355836 pod_ready.go:81] duration metric: took 8.888211ms for pod "etcd-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.357679 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.365407 1355836 pod_ready.go:92] pod "kube-apiserver-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.365435 1355836 pod_ready.go:81] duration metric: took 7.747924ms for pod "kube-apiserver-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.365448 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.371850 1355836 pod_ready.go:92] pod "kube-controller-manager-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.371879 1355836 pod_ready.go:81] duration metric: took 6.423149ms for pod "kube-controller-manager-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.371894 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kdmkz" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.385799 1355836 pod_ready.go:92] pod "kube-proxy-kdmkz" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.385825 1355836 pod_ready.go:81] duration metric: took 13.923415ms for pod "kube-proxy-kdmkz" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.385838 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.435806 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:45.636944 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:45.785027 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:45.786329 1355836 pod_ready.go:92] pod "kube-scheduler-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.786353 1355836 pod_ready.go:81] duration metric: took 400.506788ms for pod "kube-scheduler-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.786365 1355836 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.794138 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:45.936054 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:46.137635 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:46.284192 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:46.293796 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:46.435774 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:46.640536 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:46.784807 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:46.793980 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:46.936463 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:47.137831 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:47.284436 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:47.294369 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:47.437152 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:47.640920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:47.785859 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:47.796210 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:47.799205 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:47.944611 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:48.137546 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:48.284940 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:48.303874 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:48.437960 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:48.640364 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:48.787072 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:48.810622 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:48.936158 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:49.144863 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:49.306461 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:49.307759 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:49.436778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:49.639260 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:49.784424 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:49.802570 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:49.807418 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:49.934984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:50.139586 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:50.284349 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:50.307340 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:50.437580 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:50.639623 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:50.787713 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:50.796683 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:50.939020 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:51.137537 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:51.283686 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:51.294529 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:51.434891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:51.636924 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:51.784150 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:51.796762 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:51.935658 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:52.138533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:52.284790 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:52.308280 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:52.308625 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:52.435217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:52.649901 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:52.784528 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:52.803805 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:52.935952 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:53.138309 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:53.284266 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:53.316830 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:53.435540 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:53.637264 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:53.786293 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:53.815588 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:53.936195 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:54.143110 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:54.284498 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:54.297377 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:54.435944 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:54.639239 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:54.785544 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:54.798928 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:54.799614 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:54.935609 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:55.138436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:55.285254 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:55.306802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:55.435594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:55.638169 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:55.784653 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:55.796742 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:55.934999 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:56.137457 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:56.284670 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:56.298013 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:56.435854 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:56.637236 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:56.784031 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:56.795320 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:56.937019 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:57.137798 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:57.284872 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:57.293193 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:57.297469 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:57.435329 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:57.637621 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:57.784349 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:57.796807 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:57.935442 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:58.137170 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:58.303260 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:58.309039 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:58.436439 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:58.636834 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:58.783635 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:58.792725 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:58.934663 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:59.139933 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:59.284015 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:59.294197 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:59.296695 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:59.450186 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:59.636848 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:59.784253 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:59.798756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:59.935513 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:00.156840 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:00.286987 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:00.302884 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:00.442819 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:00.637123 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:00.785234 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:00.794928 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:00.935736 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:01.137257 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:01.283371 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:01.294330 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:01.435500 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:01.638226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:01.784393 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:01.794172 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:01.796817 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:01.935352 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:02.158214 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:02.312610 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:02.323765 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:02.436758 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:02.640826 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:02.784464 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:02.796422 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:02.935574 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:03.137995 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:03.286495 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:03.308515 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:03.437185 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:03.638888 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:03.784508 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:03.797252 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:03.936226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:04.154473 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:04.284371 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:04.298655 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:04.303305 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:04.435420 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:04.638067 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:04.805936 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:04.806533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:04.936008 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:05.137509 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:05.283814 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:05.296911 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:05.435103 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:05.639468 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:05.785056 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:05.802416 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:05.935387 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:06.137920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:06.284421 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:06.294198 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:06.437394 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:06.638216 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:06.784334 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:06.792802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:06.793529 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:06.935668 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:07.137437 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:07.286573 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:07.302608 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:07.436560 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:07.638208 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:07.783435 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:07.801421 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:07.935584 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:08.137542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:08.284113 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:08.294533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:08.435298 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:08.637749 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:08.783761 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:08.795830 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:08.935307 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:09.146844 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:09.284545 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:09.325135 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:09.326340 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:09.435369 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:09.637212 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:09.785513 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:09.796498 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:09.936762 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:10.139025 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:10.284142 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:10.295795 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:10.435374 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:10.638546 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:10.783789 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:10.795431 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:10.935157 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:11.137662 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:11.283905 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:11.294511 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:11.435523 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:11.639439 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:11.783773 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:11.794708 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:11.795044 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:11.937919 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:12.137073 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:12.284601 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:12.294292 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:12.435270 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:12.637259 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:12.785021 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:12.800264 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:12.937569 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:13.139493 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:13.284448 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:13.311679 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:13.435952 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:13.638741 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:13.796094 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:13.801471 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:13.804350 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:13.935407 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:14.137192 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:14.288796 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:14.306339 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:14.437721 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:14.640338 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:14.785812 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:14.794566 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:14.936772 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:15.137925 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:15.285668 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:15.297078 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:15.435790 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:15.637702 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:15.785824 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:15.804144 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:15.937733 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:16.139363 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:16.283578 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:16.294115 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:16.307693 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:16.435210 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:16.637891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:16.786183 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:16.794550 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:16.934478 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:17.137271 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:17.283689 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:17.294473 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:17.435349 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:17.637218 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:17.785974 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:17.799603 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:17.935685 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:18.137212 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:18.284878 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:18.297436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:18.437667 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:18.639140 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:18.784626 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:18.792788 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:18.794196 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:18.934601 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:19.137858 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:19.284118 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:19.295132 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:19.436802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:19.637699 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:19.783763 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:19.797718 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:19.940112 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:20.138599 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:20.284941 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:20.294723 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:20.435481 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:20.638176 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:20.784357 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:20.794406 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:20.940879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:21.145696 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:21.284610 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:21.295320 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:21.295988 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:21.435630 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:21.637863 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:21.784720 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:21.797938 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:21.935521 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:22.137563 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:22.297003 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:22.317072 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:22.435879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:22.637809 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:22.784794 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:22.794584 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:22.935586 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:23.139059 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:23.285491 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:23.312511 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:23.314120 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:23.436018 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:23.638479 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:23.786334 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:23.795877 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:23.935384 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:24.139316 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:24.285890 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:24.313703 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:24.435565 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:24.637263 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:24.784044 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:24.802334 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:24.938932 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:25.153188 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:25.284260 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:25.297444 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:25.435031 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:25.637380 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:25.784543 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:25.793943 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:25.794756 1355836 kapi.go:107] duration metric: took 1m11.005836437s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 21:33:25.941894 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:26.136992 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:26.284036 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:26.435632 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:26.638231 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:26.785492 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:26.936384 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:27.137516 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:27.284063 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:27.434834 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:27.639791 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:27.784609 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:27.795721 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:27.943235 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:28.138645 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:28.286879 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:28.436993 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:28.638346 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:28.786900 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:28.937382 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:29.154481 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:29.286623 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:29.439255 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:29.637205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:29.784651 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:29.935179 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:30.139090 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:30.284093 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:30.293015 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:30.436182 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:30.638009 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:30.785025 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:30.935751 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:31.138182 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:31.285537 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:31.437169 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:31.637337 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:31.784967 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:31.935097 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:32.136919 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:32.283934 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:32.300284 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:32.441778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:32.637266 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:32.784175 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:32.934968 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:33.136958 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:33.284293 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:33.434797 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:33.637600 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:33.783721 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:33.936697 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:34.136838 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:34.284932 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:34.435322 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:34.638155 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:34.790009 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:34.796864 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:34.934826 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:35.137945 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:35.288361 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:35.435174 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:35.637879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:35.784093 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:35.935553 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:36.138493 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:36.284182 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:36.435295 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:36.638340 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:36.784794 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:36.936200 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:37.137452 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:37.284426 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:37.291604 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:37.435417 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:37.637523 1355836 kapi.go:107] duration metric: took 1m22.505939725s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 21:33:37.784443 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:37.935551 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:38.283803 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:38.435688 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:38.783455 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:38.935594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:39.284190 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:39.292911 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:39.436686 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:39.783577 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:39.938183 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:40.283737 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:40.435929 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:40.784852 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:40.936024 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:41.284094 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:41.434840 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:41.784494 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:41.792354 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:41.935317 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:42.285081 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:42.435487 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:42.784302 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:42.935224 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:43.283873 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:43.435374 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:43.784505 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:43.934653 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:44.284206 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:44.293041 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:44.436123 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:44.784674 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:44.935215 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:45.284869 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:45.436205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:45.784402 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:45.934609 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:46.283919 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:46.293154 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:46.435561 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:46.784806 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:46.939046 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:47.283333 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:47.435018 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:47.784564 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:47.935237 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:48.284605 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:48.435037 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:48.783531 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:48.792346 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:48.935287 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:49.284339 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:49.435642 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:49.794352 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:49.935785 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:50.285042 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:50.435671 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:50.785424 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:50.797045 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:50.936217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:51.284546 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:51.437445 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:51.785547 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:51.936888 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:52.283737 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:52.435547 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:52.784011 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:52.946010 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:53.283581 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:53.298545 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:53.435412 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:53.785329 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:53.935677 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.292517 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:54.308565 1355836 pod_ready.go:92] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"True"
	I0528 21:33:54.308637 1355836 pod_ready.go:81] duration metric: took 1m8.522262577s for pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.308665 1355836 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.317195 1355836 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace has status "Ready":"True"
	I0528 21:33:54.317266 1355836 pod_ready.go:81] duration metric: took 8.58069ms for pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.317303 1355836 pod_ready.go:38] duration metric: took 1m10.51126126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:33:54.317344 1355836 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:33:54.317389 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:33:54.317472 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:33:54.371255 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:33:54.371279 1355836 cri.go:89] found id: ""
	I0528 21:33:54.371287 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:33:54.371342 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.374843 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:33:54.374926 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:33:54.438938 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.452048 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:33:54.452071 1355836 cri.go:89] found id: ""
	I0528 21:33:54.452080 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:33:54.452133 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.456003 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:33:54.456073 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:33:54.505220 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:33:54.505250 1355836 cri.go:89] found id: ""
	I0528 21:33:54.505257 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:33:54.505319 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.509052 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:33:54.509122 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:33:54.555663 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:33:54.555736 1355836 cri.go:89] found id: ""
	I0528 21:33:54.555759 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:33:54.555843 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.560074 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:33:54.560198 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:33:54.608141 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:33:54.608225 1355836 cri.go:89] found id: ""
	I0528 21:33:54.608247 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:33:54.608347 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.613242 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:33:54.613397 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:33:54.666134 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:33:54.666223 1355836 cri.go:89] found id: ""
	I0528 21:33:54.666247 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:33:54.666342 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.672415 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:33:54.672580 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:33:54.722230 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:33:54.722262 1355836 cri.go:89] found id: ""
	I0528 21:33:54.722270 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:33:54.722334 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.727533 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:33:54.727560 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:33:54.796601 1355836 kapi.go:107] duration metric: took 1m40.017182978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0528 21:33:54.800735 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:33:54.800757 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:33:54.851302 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:33:54.851342 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:33:54.888344 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:33:54.888371 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:33:54.932069 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:33:54.932095 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:33:54.936002 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.989493 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:33:54.989527 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:33:55.074331 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:33:55.074368 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:33:55.192611 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:33:55.192654 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:33:55.231788 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:33:55.232004 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:33:55.278589 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:33:55.278623 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:33:55.297947 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:33:55.297985 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:33:55.439755 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:55.482367 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:33:55.482449 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:33:55.561175 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:33:55.561208 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:33:55.615990 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:33:55.616022 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:33:55.616076 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:33:55.616090 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:33:55.616097 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:33:55.616109 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:33:55.616115 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:33:55.935702 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:56.438724 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:56.934646 1355836 kapi.go:107] duration metric: took 1m38.003309418s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 21:33:56.937270 1355836 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-504712 cluster.
	I0528 21:33:56.939637 1355836 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 21:33:56.941662 1355836 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 21:33:56.944096 1355836 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0528 21:33:56.946229 1355836 addons.go:510] duration metric: took 1m48.809728539s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns metrics-server nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0528 21:34:05.616886 1355836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:34:05.630059 1355836 api_server.go:72] duration metric: took 1m57.49379782s to wait for apiserver process to appear ...
	I0528 21:34:05.630088 1355836 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:34:05.630121 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:34:05.630181 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:34:05.667264 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:05.667285 1355836 cri.go:89] found id: ""
	I0528 21:34:05.667293 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:34:05.667351 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.670776 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:34:05.670846 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:34:05.712939 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:05.712964 1355836 cri.go:89] found id: ""
	I0528 21:34:05.712972 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:34:05.713028 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.716770 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:34:05.716845 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:34:05.757263 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:05.757285 1355836 cri.go:89] found id: ""
	I0528 21:34:05.757293 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:34:05.757347 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.760708 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:34:05.760774 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:34:05.799571 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:05.799592 1355836 cri.go:89] found id: ""
	I0528 21:34:05.799601 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:34:05.799660 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.803317 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:34:05.803390 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:34:05.847332 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:05.847352 1355836 cri.go:89] found id: ""
	I0528 21:34:05.847360 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:34:05.847415 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.850826 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:34:05.850902 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:34:05.890416 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:05.890443 1355836 cri.go:89] found id: ""
	I0528 21:34:05.890451 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:34:05.890510 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.893871 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:34:05.893943 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:34:05.936497 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:05.936521 1355836 cri.go:89] found id: ""
	I0528 21:34:05.936529 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:34:05.936586 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.940213 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:34:05.940247 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:34:05.960261 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:34:05.960290 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:06.018297 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:34:06.018334 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:06.104191 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:34:06.104226 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:34:06.155670 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:34:06.155702 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:34:06.194482 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:06.194698 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:06.242820 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:34:06.242856 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:06.298617 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:34:06.298648 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:06.340104 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:34:06.340134 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:06.387329 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:34:06.387357 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:06.424482 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:34:06.424524 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:06.471224 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:34:06.471254 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:34:06.566443 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:34:06.566518 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:34:06.705996 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:06.706099 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:34:06.706185 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:34:06.706225 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:06.706256 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:06.706301 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:06.706320 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:34:16.707164 1355836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0528 21:34:16.714645 1355836 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0528 21:34:16.715609 1355836 api_server.go:141] control plane version: v1.30.1
	I0528 21:34:16.715636 1355836 api_server.go:131] duration metric: took 11.08553971s to wait for apiserver health ...
	I0528 21:34:16.715645 1355836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:34:16.715665 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:34:16.715726 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:34:16.755795 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:16.755817 1355836 cri.go:89] found id: ""
	I0528 21:34:16.755825 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:34:16.755901 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.759299 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:34:16.759369 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:34:16.801408 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:16.801430 1355836 cri.go:89] found id: ""
	I0528 21:34:16.801438 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:34:16.801495 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.805484 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:34:16.805558 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:34:16.856090 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:16.856113 1355836 cri.go:89] found id: ""
	I0528 21:34:16.856121 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:34:16.856181 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.859858 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:34:16.859931 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:34:16.898806 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:16.898830 1355836 cri.go:89] found id: ""
	I0528 21:34:16.898837 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:34:16.898900 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.902333 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:34:16.902402 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:34:16.941755 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:16.941779 1355836 cri.go:89] found id: ""
	I0528 21:34:16.941787 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:34:16.941839 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.945208 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:34:16.945280 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:34:16.983476 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:16.983498 1355836 cri.go:89] found id: ""
	I0528 21:34:16.983506 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:34:16.983560 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.987701 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:34:16.987792 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:34:17.030638 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:17.030661 1355836 cri.go:89] found id: ""
	I0528 21:34:17.030668 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:34:17.030754 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:17.034223 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:34:17.034246 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:34:17.073050 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:17.073261 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:17.123432 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:34:17.123471 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:34:17.142282 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:34:17.142310 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:17.191754 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:34:17.191786 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:17.236288 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:34:17.236318 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:34:17.332274 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:34:17.332312 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:34:17.379855 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:34:17.379889 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:34:17.516502 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:34:17.516529 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:17.585125 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:34:17.585156 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:17.628067 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:34:17.628096 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:17.667467 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:34:17.667496 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:17.755784 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:34:17.755820 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:17.798516 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:17.798541 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:34:17.798596 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:34:17.798607 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:17.798614 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:17.798627 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:17.798635 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:34:27.809248 1355836 system_pods.go:59] 18 kube-system pods found
	I0528 21:34:27.809288 1355836 system_pods.go:61] "coredns-7db6d8ff4d-5qs7n" [123e6e9f-938c-4637-9df3-48445a053447] Running
	I0528 21:34:27.809294 1355836 system_pods.go:61] "csi-hostpath-attacher-0" [03affddc-37af-40ff-91d0-201caebcf9d4] Running
	I0528 21:34:27.809298 1355836 system_pods.go:61] "csi-hostpath-resizer-0" [9a45d0fd-aba5-4709-b9cd-9bdc1a3ae6d2] Running
	I0528 21:34:27.809302 1355836 system_pods.go:61] "csi-hostpathplugin-whvsm" [a24550b9-f416-492b-aec0-fb3a0247163d] Running
	I0528 21:34:27.809306 1355836 system_pods.go:61] "etcd-addons-504712" [14693a74-3f6d-434a-9659-b8117a7f4cfe] Running
	I0528 21:34:27.809311 1355836 system_pods.go:61] "kindnet-h8d66" [a1157f9e-ea43-46f3-bc60-a3f92737ea52] Running
	I0528 21:34:27.809316 1355836 system_pods.go:61] "kube-apiserver-addons-504712" [f56fa365-330a-4949-acba-efa405848af8] Running
	I0528 21:34:27.809320 1355836 system_pods.go:61] "kube-controller-manager-addons-504712" [8f1a9ee6-4cbc-4605-9900-707045250fa5] Running
	I0528 21:34:27.809328 1355836 system_pods.go:61] "kube-ingress-dns-minikube" [f992c4bf-c862-45ab-bbb9-bc45aa22a765] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 21:34:27.809333 1355836 system_pods.go:61] "kube-proxy-kdmkz" [6d9390b9-56ba-40e3-80d9-68427b904453] Running
	I0528 21:34:27.809345 1355836 system_pods.go:61] "kube-scheduler-addons-504712" [fd2276ce-151c-41df-8b1c-a8ec138481f9] Running
	I0528 21:34:27.809350 1355836 system_pods.go:61] "metrics-server-c59844bb4-99j6d" [6c20ca2e-5167-4501-8529-d317230ce330] Running
	I0528 21:34:27.809354 1355836 system_pods.go:61] "nvidia-device-plugin-daemonset-p6z9d" [5a8692ef-b68d-4ec3-a15c-1c8c61eff11e] Running
	I0528 21:34:27.809361 1355836 system_pods.go:61] "registry-gjvvs" [769902e5-f85c-4a07-b2c6-d37f1fb19841] Running
	I0528 21:34:27.809364 1355836 system_pods.go:61] "registry-proxy-8zzlh" [acd09f12-58ca-45ba-a43a-ccae6df2d939] Running
	I0528 21:34:27.809367 1355836 system_pods.go:61] "snapshot-controller-745499f584-tqm7g" [759bb548-b796-41f3-a876-a454c4679056] Running
	I0528 21:34:27.809372 1355836 system_pods.go:61] "snapshot-controller-745499f584-w8hf9" [1f31c5a1-57e5-4832-832a-97cf98ffbf32] Running
	I0528 21:34:27.809376 1355836 system_pods.go:61] "storage-provisioner" [120e8c42-5a1a-459e-acae-21a1a864b05d] Running
	I0528 21:34:27.809384 1355836 system_pods.go:74] duration metric: took 11.093731387s to wait for pod list to return data ...
	I0528 21:34:27.809398 1355836 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:34:27.811921 1355836 default_sa.go:45] found service account: "default"
	I0528 21:34:27.811946 1355836 default_sa.go:55] duration metric: took 2.542131ms for default service account to be created ...
	I0528 21:34:27.811955 1355836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:34:27.822813 1355836 system_pods.go:86] 18 kube-system pods found
	I0528 21:34:27.822851 1355836 system_pods.go:89] "coredns-7db6d8ff4d-5qs7n" [123e6e9f-938c-4637-9df3-48445a053447] Running
	I0528 21:34:27.822859 1355836 system_pods.go:89] "csi-hostpath-attacher-0" [03affddc-37af-40ff-91d0-201caebcf9d4] Running
	I0528 21:34:27.822867 1355836 system_pods.go:89] "csi-hostpath-resizer-0" [9a45d0fd-aba5-4709-b9cd-9bdc1a3ae6d2] Running
	I0528 21:34:27.822871 1355836 system_pods.go:89] "csi-hostpathplugin-whvsm" [a24550b9-f416-492b-aec0-fb3a0247163d] Running
	I0528 21:34:27.822875 1355836 system_pods.go:89] "etcd-addons-504712" [14693a74-3f6d-434a-9659-b8117a7f4cfe] Running
	I0528 21:34:27.822880 1355836 system_pods.go:89] "kindnet-h8d66" [a1157f9e-ea43-46f3-bc60-a3f92737ea52] Running
	I0528 21:34:27.822884 1355836 system_pods.go:89] "kube-apiserver-addons-504712" [f56fa365-330a-4949-acba-efa405848af8] Running
	I0528 21:34:27.822889 1355836 system_pods.go:89] "kube-controller-manager-addons-504712" [8f1a9ee6-4cbc-4605-9900-707045250fa5] Running
	I0528 21:34:27.822899 1355836 system_pods.go:89] "kube-ingress-dns-minikube" [f992c4bf-c862-45ab-bbb9-bc45aa22a765] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 21:34:27.822904 1355836 system_pods.go:89] "kube-proxy-kdmkz" [6d9390b9-56ba-40e3-80d9-68427b904453] Running
	I0528 21:34:27.822910 1355836 system_pods.go:89] "kube-scheduler-addons-504712" [fd2276ce-151c-41df-8b1c-a8ec138481f9] Running
	I0528 21:34:27.822914 1355836 system_pods.go:89] "metrics-server-c59844bb4-99j6d" [6c20ca2e-5167-4501-8529-d317230ce330] Running
	I0528 21:34:27.822918 1355836 system_pods.go:89] "nvidia-device-plugin-daemonset-p6z9d" [5a8692ef-b68d-4ec3-a15c-1c8c61eff11e] Running
	I0528 21:34:27.822923 1355836 system_pods.go:89] "registry-gjvvs" [769902e5-f85c-4a07-b2c6-d37f1fb19841] Running
	I0528 21:34:27.822926 1355836 system_pods.go:89] "registry-proxy-8zzlh" [acd09f12-58ca-45ba-a43a-ccae6df2d939] Running
	I0528 21:34:27.822930 1355836 system_pods.go:89] "snapshot-controller-745499f584-tqm7g" [759bb548-b796-41f3-a876-a454c4679056] Running
	I0528 21:34:27.822935 1355836 system_pods.go:89] "snapshot-controller-745499f584-w8hf9" [1f31c5a1-57e5-4832-832a-97cf98ffbf32] Running
	I0528 21:34:27.822946 1355836 system_pods.go:89] "storage-provisioner" [120e8c42-5a1a-459e-acae-21a1a864b05d] Running
	I0528 21:34:27.822954 1355836 system_pods.go:126] duration metric: took 10.993511ms to wait for k8s-apps to be running ...
	I0528 21:34:27.822963 1355836 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:34:27.823044 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:34:27.838495 1355836 system_svc.go:56] duration metric: took 15.521939ms WaitForService to wait for kubelet
	I0528 21:34:27.838522 1355836 kubeadm.go:576] duration metric: took 2m19.702265644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:34:27.838541 1355836 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:34:27.841994 1355836 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 21:34:27.842065 1355836 node_conditions.go:123] node cpu capacity is 2
	I0528 21:34:27.842080 1355836 node_conditions.go:105] duration metric: took 3.532974ms to run NodePressure ...
	I0528 21:34:27.842095 1355836 start.go:240] waiting for startup goroutines ...
	I0528 21:34:27.842105 1355836 start.go:245] waiting for cluster config update ...
	I0528 21:34:27.842122 1355836 start.go:254] writing updated cluster config ...
	I0528 21:34:27.842404 1355836 ssh_runner.go:195] Run: rm -f paused
	I0528 21:34:28.112018 1355836 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:34:28.115564 1355836 out.go:177] * Done! kubectl is now configured to use "addons-504712" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 21:38:18 addons-504712 conmon[5306]: conmon bd6842ccd0afc01f4e17 <ninfo>: container 5317 exited with status 137
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.586933213Z" level=info msg="Stopped container bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495: ingress-nginx/ingress-nginx-controller-768f948f8f-jt859/controller" id=f3ff3788-9690-448e-9214-dc8dafb9e34c name=/runtime.v1.RuntimeService/StopContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.587555105Z" level=info msg="Stopping pod sandbox: 024587536ad620e7b16fafa010863f206553fe435223a3969f92408ad8a57604" id=4ea709df-581e-4196-8741-465ad7164e37 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.591414856Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-BP4NKXKECIILWDA5 - [0:0]\n:KUBE-HP-UBMA6WHXSPJ4TMGI - [0:0]\n-X KUBE-HP-UBMA6WHXSPJ4TMGI\n-X KUBE-HP-BP4NKXKECIILWDA5\nCOMMIT\n"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.593055750Z" level=info msg="Closing host port tcp:80"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.593105595Z" level=info msg="Closing host port tcp:443"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.594558268Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.594595936Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.594807558Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-jt859 Namespace:ingress-nginx ID:024587536ad620e7b16fafa010863f206553fe435223a3969f92408ad8a57604 UID:95a0306f-b2eb-43dd-b005-e703baa9ecef NetNS:/var/run/netns/5a0c106e-cc3b-429d-834d-2a0b78924eeb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.594983217Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-jt859 from CNI network \"kindnet\" (type=ptp)"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.623751001Z" level=info msg="Stopped pod sandbox: 024587536ad620e7b16fafa010863f206553fe435223a3969f92408ad8a57604" id=4ea709df-581e-4196-8741-465ad7164e37 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.773025244Z" level=info msg="Removing container: bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495" id=b8f12efd-3bf3-4c4c-889c-4f5071a42f5c name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.789368996Z" level=info msg="Removed container bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495: ingress-nginx/ingress-nginx-controller-768f948f8f-jt859/controller" id=b8f12efd-3bf3-4c4c-889c-4f5071a42f5c name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.928681824Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c41cb57b-dff8-43cf-b058-e7968755f8fe name=/runtime.v1.ImageService/ImageStatus
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.928903505Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c41cb57b-dff8-43cf-b058-e7968755f8fe name=/runtime.v1.ImageService/ImageStatus
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.929949920Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=aff6f019-15e7-4bdd-a4f9-3dbb0e351537 name=/runtime.v1.ImageService/ImageStatus
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.930334812Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=aff6f019-15e7-4bdd-a4f9-3dbb0e351537 name=/runtime.v1.ImageService/ImageStatus
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.931127047Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-bp6tp/hello-world-app" id=b91cd074-7bd6-4b9f-a77e-4afb5ab210d3 name=/runtime.v1.RuntimeService/CreateContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.931230535Z" level=warning msg="Allowed annotations are specified for workload []"
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.990457916Z" level=info msg="Created container e14e961b1da462b102430a407faa7c7b43e01b957586ab508181d61f0d54174a: default/hello-world-app-86c47465fc-bp6tp/hello-world-app" id=b91cd074-7bd6-4b9f-a77e-4afb5ab210d3 name=/runtime.v1.RuntimeService/CreateContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.991313132Z" level=info msg="Starting container: e14e961b1da462b102430a407faa7c7b43e01b957586ab508181d61f0d54174a" id=173e15a0-9365-4125-bb3c-30eb3573bee0 name=/runtime.v1.RuntimeService/StartContainer
	May 28 21:38:18 addons-504712 crio[917]: time="2024-05-28 21:38:18.997861881Z" level=info msg="Started container" PID=8116 containerID=e14e961b1da462b102430a407faa7c7b43e01b957586ab508181d61f0d54174a description=default/hello-world-app-86c47465fc-bp6tp/hello-world-app id=173e15a0-9365-4125-bb3c-30eb3573bee0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7f164109a5ff1960ca2f43c5d2ee6a1f24d3144fa57420c76de8e2d96de0f87
	May 28 21:38:19 addons-504712 conmon[8105]: conmon e14e961b1da462b10243 <ninfo>: container 8116 exited with status 1
	May 28 21:38:19 addons-504712 crio[917]: time="2024-05-28 21:38:19.777157083Z" level=info msg="Removing container: 60380fee080061c6feae25f1d0990bf48bb3fc987daeb12df073a90b0c47795d" id=d4551907-870f-4a99-ab42-823adc350076 name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:38:19 addons-504712 crio[917]: time="2024-05-28 21:38:19.801620655Z" level=info msg="Removed container 60380fee080061c6feae25f1d0990bf48bb3fc987daeb12df073a90b0c47795d: default/hello-world-app-86c47465fc-bp6tp/hello-world-app" id=d4551907-870f-4a99-ab42-823adc350076 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	e14e961b1da46       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             4 seconds ago       Exited              hello-world-app            2                   a7f164109a5ff       hello-world-app-86c47465fc-bp6tp
	3cc4d4ec5577d       docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa                              2 minutes ago       Running             nginx                      0                   611346af517ff       nginx
	cdfcfe22abb7a       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        3 minutes ago       Running             headlamp                   0                   7cd2a5bfff847       headlamp-68456f997b-48ktv
	09f6b46f6af63       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                   0                   3c6fbe6471648       gcp-auth-5db96cd9b4-ncljx
	11ad75fbd5667       296b5f799fcd8a39f0e93373bc18787d846c6a2a78a5657b1514831f043c09bf                                                             4 minutes ago       Exited              patch                      2                   686e37d5dcfe7       ingress-nginx-admission-patch-7pkcs
	1f5e67487042b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                     0                   203b06e08c41e       ingress-nginx-admission-create-z6bwq
	6654641bbbc61       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                     5 minutes ago       Running             nvidia-device-plugin-ctr   0                   5ee13ad47f80d       nvidia-device-plugin-daemonset-p6z9d
	919ce4e86580e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago       Running             local-path-provisioner     0                   81a90bb31ceea       local-path-provisioner-8d985888d-v8pvn
	75672157ce1c5       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4               5 minutes ago       Running             cloud-spanner-emulator     0                   a4f29b6489013       cloud-spanner-emulator-6fcd4f6f98-4ttqc
	a975aae5c423e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                       0                   dc706073136c7       yakd-dashboard-5ddbf7d777-bjx67
	6b48727f5c272       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server             0                   20dabd9b8a819       metrics-server-c59844bb4-99j6d
	0419ea7eeb7f5       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                    0                   112090cbd8803       coredns-7db6d8ff4d-5qs7n
	2c6bd74546fd1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner        0                   06da4576b9735       storage-provisioner
	b2eb2156bfd52       docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be                           6 minutes ago       Running             kindnet-cni                0                   a114c0ce60b1b       kindnet-h8d66
	a291f575a32c1       05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee                                                             6 minutes ago       Running             kube-proxy                 0                   db59d148453f0       kube-proxy-kdmkz
	5563c029288da       163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a                                                             6 minutes ago       Running             kube-scheduler             0                   ef1ffb33d4809       kube-scheduler-addons-504712
	e57173f763b7b       988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee                                                             6 minutes ago       Running             kube-apiserver             0                   6085e4eb2e9f2       kube-apiserver-addons-504712
	caef4ef03b389       234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4                                                             6 minutes ago       Running             kube-controller-manager    0                   2278b3f166154       kube-controller-manager-addons-504712
	3d73169449d92       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             6 minutes ago       Running             etcd                       0                   762ef4fbfbaf3       etcd-addons-504712
	
	
	==> coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] <==
	[INFO] 10.244.0.19:40913 - 13612 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057976s
	[INFO] 10.244.0.19:40913 - 46902 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060429s
	[INFO] 10.244.0.19:40913 - 23347 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069947s
	[INFO] 10.244.0.19:40913 - 59494 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060199s
	[INFO] 10.244.0.19:40913 - 65236 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001276752s
	[INFO] 10.244.0.19:40913 - 62970 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001035321s
	[INFO] 10.244.0.19:40913 - 6759 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072696s
	[INFO] 10.244.0.19:49010 - 6381 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098147s
	[INFO] 10.244.0.19:49010 - 37352 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065886s
	[INFO] 10.244.0.19:36568 - 8055 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033796s
	[INFO] 10.244.0.19:49010 - 59027 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049492s
	[INFO] 10.244.0.19:36568 - 51570 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000145555s
	[INFO] 10.244.0.19:49010 - 59591 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004205s
	[INFO] 10.244.0.19:49010 - 23738 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084076s
	[INFO] 10.244.0.19:49010 - 23654 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057951s
	[INFO] 10.244.0.19:36568 - 23252 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000227695s
	[INFO] 10.244.0.19:36568 - 6346 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065623s
	[INFO] 10.244.0.19:36568 - 59344 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071423s
	[INFO] 10.244.0.19:36568 - 62212 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064532s
	[INFO] 10.244.0.19:49010 - 42440 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001129276s
	[INFO] 10.244.0.19:36568 - 30844 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001358915s
	[INFO] 10.244.0.19:49010 - 37643 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001042115s
	[INFO] 10.244.0.19:49010 - 48920 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075699s
	[INFO] 10.244.0.19:36568 - 63754 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002150543s
	[INFO] 10.244.0.19:36568 - 16500 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105261s
	
	
	==> describe nodes <==
	Name:               addons-504712
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-504712
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-504712
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_31_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-504712
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-504712
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:35:59 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:35:59 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:35:59 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:35:59 +0000   Tue, 28 May 2024 21:32:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-504712
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 366a2bf59c69436784fd8c89b1f0bc70
	  System UUID:                4d1c9ec3-31a0-4142-82b5-ef27d22f688d
	  Boot ID:                    2882d43f-5a85-456c-aec3-876199af1cc0
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-4ttqc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     hello-world-app-86c47465fc-bp6tp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gcp-auth                    gcp-auth-5db96cd9b4-ncljx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  headlamp                    headlamp-68456f997b-48ktv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 coredns-7db6d8ff4d-5qs7n                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m15s
	  kube-system                 etcd-addons-504712                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m29s
	  kube-system                 kindnet-h8d66                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m16s
	  kube-system                 kube-apiserver-addons-504712               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-addons-504712      200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-kdmkz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-addons-504712               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 metrics-server-c59844bb4-99j6d             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m10s
	  kube-system                 nvidia-device-plugin-daemonset-p6z9d       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  local-path-storage          local-path-provisioner-8d985888d-v8pvn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-bjx67            0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  Starting                 6m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m30s (x2 over 6m30s)  kubelet          Node addons-504712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x2 over 6m30s)  kubelet          Node addons-504712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x2 over 6m30s)  kubelet          Node addons-504712 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node addons-504712 event: Registered Node addons-504712 in Controller
	  Normal  NodeReady                5m40s                  kubelet          Node addons-504712 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001076] FS-Cache: O-key=[8] '11d6c90000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000559aec3f
	[  +0.001096] FS-Cache: N-key=[8] '11d6c90000000000'
	[  +0.002722] FS-Cache: Duplicate cookie detected
	[  +0.000859] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000e8e33e92
	[  +0.001094] FS-Cache: O-key=[8] '11d6c90000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=000000000318bcd7
	[  +0.001118] FS-Cache: N-key=[8] '11d6c90000000000'
	[  +2.671416] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=000000003d37d697
	[  +0.001081] FS-Cache: O-key=[8] '10d6c90000000000'
	[  +0.000796] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000559aec3f
	[  +0.001092] FS-Cache: N-key=[8] '10d6c90000000000'
	[  +0.273823] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000bc6509f8
	[  +0.001083] FS-Cache: O-key=[8] '16d6c90000000000'
	[  +0.000760] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001001] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000bc4002c6
	[  +0.001077] FS-Cache: N-key=[8] '16d6c90000000000'
	
	
	==> etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] <==
	{"level":"info","ts":"2024-05-28T21:31:47.512143Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:31:47.512218Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:31:47.512376Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:31:47.512424Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:31:48.174077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.174319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.174355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.17439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.178156Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182253Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-504712 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:31:48.18244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:31:48.182536Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182667Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:31:48.189933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:31:48.190053Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:31:48.194548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:31:48.221021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-05-28T21:32:08.774499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.075253ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128029487088505457 > lease_revoke:<id:70cc8fc11dcf90f0>","response":"size:29"}
	{"level":"info","ts":"2024-05-28T21:32:13.39659Z","caller":"traceutil/trace.go:171","msg":"trace[1632236740] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"124.438403ms","start":"2024-05-28T21:32:13.272135Z","end":"2024-05-28T21:32:13.396573Z","steps":["trace[1632236740] 'process raft request'  (duration: 99.187766ms)","trace[1632236740] 'compare'  (duration: 24.722913ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:32:13.397Z","caller":"traceutil/trace.go:171","msg":"trace[1809514941] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"103.038279ms","start":"2024-05-28T21:32:13.293949Z","end":"2024-05-28T21:32:13.396987Z","steps":["trace[1809514941] 'process raft request'  (duration: 102.180971ms)"],"step_count":1}
	
	
	==> gcp-auth [09f6b46f6af630a6eb46070640076f1cc2f04ac10bc9e1fab9b522a482ce6d55] <==
	2024/05/28 21:33:56 GCP Auth Webhook started!
	2024/05/28 21:34:28 Ready to marshal response ...
	2024/05/28 21:34:28 Ready to write response ...
	2024/05/28 21:34:29 Ready to marshal response ...
	2024/05/28 21:34:29 Ready to write response ...
	2024/05/28 21:34:29 Ready to marshal response ...
	2024/05/28 21:34:29 Ready to write response ...
	2024/05/28 21:34:39 Ready to marshal response ...
	2024/05/28 21:34:39 Ready to write response ...
	2024/05/28 21:34:43 Ready to marshal response ...
	2024/05/28 21:34:43 Ready to write response ...
	2024/05/28 21:35:11 Ready to marshal response ...
	2024/05/28 21:35:11 Ready to write response ...
	2024/05/28 21:35:39 Ready to marshal response ...
	2024/05/28 21:35:39 Ready to write response ...
	2024/05/28 21:37:57 Ready to marshal response ...
	2024/05/28 21:37:57 Ready to write response ...
	
	
	==> kernel <==
	 21:38:24 up  5:20,  0 users,  load average: 0.06, 0.72, 1.56
	Linux addons-504712 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] <==
	I0528 21:36:23.739364       1 main.go:227] handling current node
	I0528 21:36:33.744085       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:36:33.744118       1 main.go:227] handling current node
	I0528 21:36:43.761801       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:36:43.761831       1 main.go:227] handling current node
	I0528 21:36:53.766472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:36:53.766502       1 main.go:227] handling current node
	I0528 21:37:03.778492       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:03.778525       1 main.go:227] handling current node
	I0528 21:37:13.782202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:13.782227       1 main.go:227] handling current node
	I0528 21:37:23.789639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:23.789669       1 main.go:227] handling current node
	I0528 21:37:33.799278       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:33.799305       1 main.go:227] handling current node
	I0528 21:37:43.803600       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:43.803630       1 main.go:227] handling current node
	I0528 21:37:53.814871       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:37:53.814898       1 main.go:227] handling current node
	I0528 21:38:03.819589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:38:03.819617       1 main.go:227] handling current node
	I0528 21:38:13.825015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:38:13.825046       1 main.go:227] handling current node
	I0528 21:38:23.840831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:38:23.840861       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] <==
	W0528 21:33:54.150106       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 21:33:54.150169       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0528 21:33:54.150824       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.253.229:443: connect: connection refused
	E0528 21:33:54.158694       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.253.229:443: connect: connection refused
	E0528 21:33:54.179599       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.253.229:443: connect: connection refused
	I0528 21:33:54.406706       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0528 21:34:29.010752       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.170.46"}
	I0528 21:34:55.673159       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0528 21:35:27.108335       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.108424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.148797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.148841       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.157958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.158048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.204369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.204458       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0528 21:35:28.149175       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0528 21:35:28.205215       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0528 21:35:28.211369       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0528 21:35:33.877725       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0528 21:35:34.907804       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0528 21:35:39.434992       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0528 21:35:39.724732       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.220.114"}
	I0528 21:37:58.091722       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.73.5"}
	
	
	==> kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] <==
	E0528 21:37:19.699132       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:37:24.380556       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:37:24.380593       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:37:32.442574       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:37:32.442613       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:37:45.750104       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:37:45.750139       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:37:57.903566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="48.661403ms"
	I0528 21:37:57.934235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="30.529413ms"
	I0528 21:37:57.934391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="37.243µs"
	I0528 21:38:01.753586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="38.719µs"
	I0528 21:38:02.746058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="79.21µs"
	W0528 21:38:02.817328       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:38:02.817395       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:38:03.745890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.748µs"
	W0528 21:38:06.944231       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:38:06.944269       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:38:09.765747       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:38:09.765785       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:38:15.416939       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0528 21:38:15.422302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="5.111µs"
	I0528 21:38:15.427153       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0528 21:38:16.539704       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:38:16.539841       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:38:19.790800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.001µs"
	
	
	==> kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] <==
	I0528 21:32:12.913644       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:32:13.330840       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0528 21:32:13.787059       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0528 21:32:13.787201       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:32:13.797610       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0528 21:32:13.797715       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0528 21:32:13.797765       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:32:13.798374       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:32:13.798442       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:13.799437       1 config.go:192] "Starting service config controller"
	I0528 21:32:13.802375       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:32:13.802484       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:32:13.802862       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:32:13.803446       1 config.go:319] "Starting node config controller"
	I0528 21:32:13.803500       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:32:13.904077       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:32:13.927962       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:32:13.946162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] <==
	W0528 21:31:51.843315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:31:51.847478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 21:31:51.843350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:31:51.847516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 21:31:51.843382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 21:31:51.847537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 21:31:51.844048       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:31:51.847562       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:31:51.847010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:31:51.847583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:31:51.847078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 21:31:51.847597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 21:31:51.847135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 21:31:51.847611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 21:31:51.847187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:31:51.847635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:31:51.847247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 21:31:51.847658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 21:31:51.847297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:31:51.847671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 21:31:52.692492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 21:31:52.692532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 21:31:52.736567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:31:52.736677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0528 21:31:53.428967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:38:03 addons-504712 kubelet[1526]: E0528 21:38:03.734299    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:38:05 addons-504712 kubelet[1526]: I0528 21:38:05.928197    1526 scope.go:117] "RemoveContainer" containerID="b33a7aef80e6917d4184fa37e6291c1bbd9cd88efe8e11009f43eafe57398a6a"
	May 28 21:38:05 addons-504712 kubelet[1526]: E0528 21:38:05.928500    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f992c4bf-c862-45ab-bbb9-bc45aa22a765)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f992c4bf-c862-45ab-bbb9-bc45aa22a765"
	May 28 21:38:13 addons-504712 kubelet[1526]: I0528 21:38:13.888794    1526 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xzzr\" (UniqueName: \"kubernetes.io/projected/f992c4bf-c862-45ab-bbb9-bc45aa22a765-kube-api-access-6xzzr\") pod \"f992c4bf-c862-45ab-bbb9-bc45aa22a765\" (UID: \"f992c4bf-c862-45ab-bbb9-bc45aa22a765\") "
	May 28 21:38:13 addons-504712 kubelet[1526]: I0528 21:38:13.892523    1526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f992c4bf-c862-45ab-bbb9-bc45aa22a765-kube-api-access-6xzzr" (OuterVolumeSpecName: "kube-api-access-6xzzr") pod "f992c4bf-c862-45ab-bbb9-bc45aa22a765" (UID: "f992c4bf-c862-45ab-bbb9-bc45aa22a765"). InnerVolumeSpecName "kube-api-access-6xzzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 21:38:13 addons-504712 kubelet[1526]: I0528 21:38:13.989525    1526 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6xzzr\" (UniqueName: \"kubernetes.io/projected/f992c4bf-c862-45ab-bbb9-bc45aa22a765-kube-api-access-6xzzr\") on node \"addons-504712\" DevicePath \"\""
	May 28 21:38:14 addons-504712 kubelet[1526]: I0528 21:38:14.758427    1526 scope.go:117] "RemoveContainer" containerID="b33a7aef80e6917d4184fa37e6291c1bbd9cd88efe8e11009f43eafe57398a6a"
	May 28 21:38:15 addons-504712 kubelet[1526]: I0528 21:38:15.929076    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01eab89e-5da8-42a6-a8a0-5a70fbcc4588" path="/var/lib/kubelet/pods/01eab89e-5da8-42a6-a8a0-5a70fbcc4588/volumes"
	May 28 21:38:15 addons-504712 kubelet[1526]: I0528 21:38:15.929505    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f112e43d-456e-4975-ab8c-0ff3b6fbb852" path="/var/lib/kubelet/pods/f112e43d-456e-4975-ab8c-0ff3b6fbb852/volumes"
	May 28 21:38:15 addons-504712 kubelet[1526]: I0528 21:38:15.929841    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f992c4bf-c862-45ab-bbb9-bc45aa22a765" path="/var/lib/kubelet/pods/f992c4bf-c862-45ab-bbb9-bc45aa22a765/volumes"
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.722547    1526 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95a0306f-b2eb-43dd-b005-e703baa9ecef-webhook-cert\") pod \"95a0306f-b2eb-43dd-b005-e703baa9ecef\" (UID: \"95a0306f-b2eb-43dd-b005-e703baa9ecef\") "
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.722609    1526 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cqvq\" (UniqueName: \"kubernetes.io/projected/95a0306f-b2eb-43dd-b005-e703baa9ecef-kube-api-access-4cqvq\") pod \"95a0306f-b2eb-43dd-b005-e703baa9ecef\" (UID: \"95a0306f-b2eb-43dd-b005-e703baa9ecef\") "
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.725079    1526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a0306f-b2eb-43dd-b005-e703baa9ecef-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "95a0306f-b2eb-43dd-b005-e703baa9ecef" (UID: "95a0306f-b2eb-43dd-b005-e703baa9ecef"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.727959    1526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a0306f-b2eb-43dd-b005-e703baa9ecef-kube-api-access-4cqvq" (OuterVolumeSpecName: "kube-api-access-4cqvq") pod "95a0306f-b2eb-43dd-b005-e703baa9ecef" (UID: "95a0306f-b2eb-43dd-b005-e703baa9ecef"). InnerVolumeSpecName "kube-api-access-4cqvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.771198    1526 scope.go:117] "RemoveContainer" containerID="bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495"
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.789766    1526 scope.go:117] "RemoveContainer" containerID="bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495"
	May 28 21:38:18 addons-504712 kubelet[1526]: E0528 21:38:18.790156    1526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495\": container with ID starting with bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495 not found: ID does not exist" containerID="bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495"
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.790196    1526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495"} err="failed to get container status \"bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495\": rpc error: code = NotFound desc = could not find container \"bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495\": container with ID starting with bd6842ccd0afc01f4e176963d8a5af2fabf9b7becb3fe30ed88e4da2e84c8495 not found: ID does not exist"
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.823543    1526 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95a0306f-b2eb-43dd-b005-e703baa9ecef-webhook-cert\") on node \"addons-504712\" DevicePath \"\""
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.823582    1526 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4cqvq\" (UniqueName: \"kubernetes.io/projected/95a0306f-b2eb-43dd-b005-e703baa9ecef-kube-api-access-4cqvq\") on node \"addons-504712\" DevicePath \"\""
	May 28 21:38:18 addons-504712 kubelet[1526]: I0528 21:38:18.928087    1526 scope.go:117] "RemoveContainer" containerID="60380fee080061c6feae25f1d0990bf48bb3fc987daeb12df073a90b0c47795d"
	May 28 21:38:19 addons-504712 kubelet[1526]: I0528 21:38:19.774842    1526 scope.go:117] "RemoveContainer" containerID="60380fee080061c6feae25f1d0990bf48bb3fc987daeb12df073a90b0c47795d"
	May 28 21:38:19 addons-504712 kubelet[1526]: I0528 21:38:19.775133    1526 scope.go:117] "RemoveContainer" containerID="e14e961b1da462b102430a407faa7c7b43e01b957586ab508181d61f0d54174a"
	May 28 21:38:19 addons-504712 kubelet[1526]: E0528 21:38:19.775377    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:38:19 addons-504712 kubelet[1526]: I0528 21:38:19.929853    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a0306f-b2eb-43dd-b005-e703baa9ecef" path="/var/lib/kubelet/pods/95a0306f-b2eb-43dd-b005-e703baa9ecef/volumes"
	
	
	==> storage-provisioner [2c6bd74546fd14a99d77c44d780b2f3861328b347ffad4348ed7d992edbe4b84] <==
	I0528 21:32:44.664134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:32:44.680011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:32:44.680055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:32:44.690343       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:32:44.691008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b!
	I0528 21:32:44.691778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1bb1130-8ffe-43f7-a0d5-c9295411015b", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b became leader
	I0528 21:32:44.791167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-504712 -n addons-504712
helpers_test.go:261: (dbg) Run:  kubectl --context addons-504712 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (165.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (366.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.300798ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-99j6d" [6c20ca2e-5167-4501-8529-d317230ce330] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004885637s
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (97.093502ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 2m42.218973434s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (94.847463ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 2m44.124958815s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (94.078169ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 2m49.946001451s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (102.160246ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 2m58.36056781s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (97.068683ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 3m6.360894452s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (116.268805ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 3m25.561304943s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (86.921713ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 3m37.490475781s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (92.188519ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 4m20.11705263s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (89.43713ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 5m7.453413989s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (89.408724ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 6m32.377038364s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (88.720168ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 7m19.091899939s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-504712 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-504712 top pods -n kube-system: exit status 1 (86.155974ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5qs7n, age: 8m39.368553545s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-504712
helpers_test.go:235: (dbg) docker inspect addons-504712:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55",
	        "Created": "2024-05-28T21:31:29.068232415Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1356311,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T21:31:29.377870683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/hostname",
	        "HostsPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/hosts",
	        "LogPath": "/var/lib/docker/containers/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55/85f22f353ce0c8e0c69785bc168aeadfc8ca833607ef585f03d7b1f534c00b55-json.log",
	        "Name": "/addons-504712",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-504712:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-504712",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1-init/diff:/var/lib/docker/overlay2/41cb90b313a958e97d6c40ed76425369b134e98a770fd8f601707592b588c01d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1920e00c1df99f4235a564e576a7b6842927faf5f87245cfbc623c3a9deeb1f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-504712",
	                "Source": "/var/lib/docker/volumes/addons-504712/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-504712",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-504712",
	                "name.minikube.sigs.k8s.io": "addons-504712",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "423b96190bfe57e45830045019e4a6e241934393bb3cf692800588c6d4a84066",
	            "SandboxKey": "/var/run/docker/netns/423b96190bfe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34299"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34297"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34296"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-504712": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fea6bb4deb3b35b9f5deef4dc891bc567bf387551481e264d8c46eac4277403b",
	                    "EndpointID": "a87268d110dd188941d37a81f6013d79545058df629ca07aac8b97c34dc6e299",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-504712",
	                        "85f22f353ce0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-504712 -n addons-504712
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 logs -n 25: (1.489210293s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-723265                                                                     | download-only-723265   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| delete  | -p download-only-064906                                                                     | download-only-064906   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| delete  | -p download-only-723265                                                                     | download-only-723265   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | --download-only -p                                                                          | download-docker-966905 | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | download-docker-966905                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-966905                                                                   | download-docker-966905 | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-950707   | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | binary-mirror-950707                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38569                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-950707                                                                     | binary-mirror-950707   | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| addons  | enable dashboard -p                                                                         | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | addons-504712                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | addons-504712                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-504712 --wait=true                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	|         | -p addons-504712                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-504712 ip                                                                            | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	| addons  | addons-504712 addons disable                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:34 UTC | 28 May 24 21:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-504712 addons                                                                        | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-504712 addons                                                                        | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | addons-504712                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-504712 ssh curl -s                                                                   | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-504712 ip                                                                            | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:37 UTC | 28 May 24 21:37 UTC |
	| addons  | addons-504712 addons disable                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-504712 addons disable                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | -p addons-504712                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-504712 ssh cat                                                                       | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:38 UTC |
	|         | /opt/local-path-provisioner/pvc-077d552a-8848-4a18-94e4-4aa30dc26f1e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-504712 addons disable                                                                | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:38 UTC | 28 May 24 21:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | addons-504712                                                                               |                        |         |         |                     |                     |
	| addons  | addons-504712 addons                                                                        | addons-504712          | jenkins | v1.33.1 | 28 May 24 21:40 UTC | 28 May 24 21:40 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:31:04
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:31:04.624360 1355836 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:31:04.624794 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:04.624810 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:31:04.624816 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:04.625108 1355836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:31:04.625624 1355836 out.go:298] Setting JSON to false
	I0528 21:31:04.626534 1355836 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18813,"bootTime":1716913052,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:31:04.626608 1355836 start.go:139] virtualization:  
	I0528 21:31:04.631210 1355836 out.go:177] * [addons-504712] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:31:04.633562 1355836 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:31:04.633611 1355836 notify.go:220] Checking for updates...
	I0528 21:31:04.636077 1355836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:31:04.638572 1355836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:31:04.640672 1355836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:31:04.642899 1355836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:31:04.644998 1355836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:31:04.647252 1355836 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:31:04.667144 1355836 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:31:04.667259 1355836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:31:04.733765 1355836 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:31:04.723790769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:31:04.733876 1355836 docker.go:295] overlay module found
	I0528 21:31:04.736144 1355836 out.go:177] * Using the docker driver based on user configuration
	I0528 21:31:04.738188 1355836 start.go:297] selected driver: docker
	I0528 21:31:04.738203 1355836 start.go:901] validating driver "docker" against <nil>
	I0528 21:31:04.738215 1355836 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:31:04.738844 1355836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:31:04.791323 1355836 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:31:04.782648082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:31:04.791493 1355836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:31:04.791719 1355836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:31:04.793910 1355836 out.go:177] * Using Docker driver with root privileges
	I0528 21:31:04.795590 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:04.795618 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:04.795631 1355836 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 21:31:04.795713 1355836 start.go:340] cluster config:
	{Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:31:04.799475 1355836 out.go:177] * Starting "addons-504712" primary control-plane node in "addons-504712" cluster
	I0528 21:31:04.801179 1355836 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 21:31:04.803182 1355836 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 21:31:04.805091 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:04.805147 1355836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 21:31:04.805157 1355836 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0528 21:31:04.805252 1355836 cache.go:56] Caching tarball of preloaded images
	I0528 21:31:04.805343 1355836 preload.go:173] Found /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0528 21:31:04.805357 1355836 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:31:04.805728 1355836 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json ...
	I0528 21:31:04.805755 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json: {Name:mk5719af5c4179174a3b9bff9067a58daf99fa48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:04.819950 1355836 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 21:31:04.820054 1355836 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 21:31:04.820072 1355836 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory, skipping pull
	I0528 21:31:04.820077 1355836 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in cache, skipping pull
	I0528 21:31:04.820083 1355836 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 21:31:04.820089 1355836 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from local cache
	I0528 21:31:21.353851 1355836 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from cached tarball
	I0528 21:31:21.353911 1355836 cache.go:194] Successfully downloaded all kic artifacts
	I0528 21:31:21.353940 1355836 start.go:360] acquireMachinesLock for addons-504712: {Name:mk8939d43682ac81e4cc316266fc1208eccf5792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:31:21.354138 1355836 start.go:364] duration metric: took 180.122µs to acquireMachinesLock for "addons-504712"
	I0528 21:31:21.354171 1355836 start.go:93] Provisioning new machine with config: &{Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:31:21.354247 1355836 start.go:125] createHost starting for "" (driver="docker")
	I0528 21:31:21.357355 1355836 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0528 21:31:21.357617 1355836 start.go:159] libmachine.API.Create for "addons-504712" (driver="docker")
	I0528 21:31:21.357663 1355836 client.go:168] LocalClient.Create starting
	I0528 21:31:21.357787 1355836 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem
	I0528 21:31:21.802430 1355836 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem
	I0528 21:31:22.683183 1355836 cli_runner.go:164] Run: docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0528 21:31:22.697789 1355836 cli_runner.go:211] docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0528 21:31:22.697874 1355836 network_create.go:281] running [docker network inspect addons-504712] to gather additional debugging logs...
	I0528 21:31:22.697895 1355836 cli_runner.go:164] Run: docker network inspect addons-504712
	W0528 21:31:22.713425 1355836 cli_runner.go:211] docker network inspect addons-504712 returned with exit code 1
	I0528 21:31:22.713454 1355836 network_create.go:284] error running [docker network inspect addons-504712]: docker network inspect addons-504712: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-504712 not found
	I0528 21:31:22.713467 1355836 network_create.go:286] output of [docker network inspect addons-504712]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-504712 not found
	
	** /stderr **
	I0528 21:31:22.713564 1355836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 21:31:22.729001 1355836 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006d13c0}
	I0528 21:31:22.729044 1355836 network_create.go:124] attempt to create docker network addons-504712 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0528 21:31:22.729101 1355836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-504712 addons-504712
	I0528 21:31:22.786258 1355836 network_create.go:108] docker network addons-504712 192.168.49.0/24 created
	I0528 21:31:22.786289 1355836 kic.go:121] calculated static IP "192.168.49.2" for the "addons-504712" container
	I0528 21:31:22.786359 1355836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0528 21:31:22.800230 1355836 cli_runner.go:164] Run: docker volume create addons-504712 --label name.minikube.sigs.k8s.io=addons-504712 --label created_by.minikube.sigs.k8s.io=true
	I0528 21:31:22.816487 1355836 oci.go:103] Successfully created a docker volume addons-504712
	I0528 21:31:22.816577 1355836 cli_runner.go:164] Run: docker run --rm --name addons-504712-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --entrypoint /usr/bin/test -v addons-504712:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib
	I0528 21:31:24.884865 1355836 cli_runner.go:217] Completed: docker run --rm --name addons-504712-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --entrypoint /usr/bin/test -v addons-504712:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib: (2.068237618s)
	I0528 21:31:24.884895 1355836 oci.go:107] Successfully prepared a docker volume addons-504712
	I0528 21:31:24.884928 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:24.884949 1355836 kic.go:194] Starting extracting preloaded images to volume ...
	I0528 21:31:24.885031 1355836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir
	I0528 21:31:29.005516 1355836 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-504712:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir: (4.120441729s)
	I0528 21:31:29.005565 1355836 kic.go:203] duration metric: took 4.120598509s to extract preloaded images to volume ...
	W0528 21:31:29.005723 1355836 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0528 21:31:29.005846 1355836 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0528 21:31:29.053901 1355836 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-504712 --name addons-504712 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-504712 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-504712 --network addons-504712 --ip 192.168.49.2 --volume addons-504712:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862
	I0528 21:31:29.389481 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Running}}
	I0528 21:31:29.417478 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:29.439926 1355836 cli_runner.go:164] Run: docker exec addons-504712 stat /var/lib/dpkg/alternatives/iptables
	I0528 21:31:29.503537 1355836 oci.go:144] the created container "addons-504712" has a running status.
	I0528 21:31:29.503565 1355836 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa...
	I0528 21:31:29.860846 1355836 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0528 21:31:29.889501 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:29.926184 1355836 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0528 21:31:29.926204 1355836 kic_runner.go:114] Args: [docker exec --privileged addons-504712 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0528 21:31:30.008673 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:31:30.032311 1355836 machine.go:94] provisionDockerMachine start ...
	I0528 21:31:30.032431 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.055272 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.055571 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.055590 1355836 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:31:30.222097 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504712
	
	I0528 21:31:30.222124 1355836 ubuntu.go:169] provisioning hostname "addons-504712"
	I0528 21:31:30.222231 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.244355 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.244601 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.244619 1355836 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-504712 && echo "addons-504712" | sudo tee /etc/hostname
	I0528 21:31:30.387661 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-504712
	
	I0528 21:31:30.387743 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.414178 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.414438 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.414461 1355836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-504712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-504712/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-504712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:31:30.538155 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:31:30.538181 1355836 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1349783/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1349783/.minikube}
	I0528 21:31:30.538201 1355836 ubuntu.go:177] setting up certificates
	I0528 21:31:30.538213 1355836 provision.go:84] configureAuth start
	I0528 21:31:30.538278 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:30.556984 1355836 provision.go:143] copyHostCerts
	I0528 21:31:30.557084 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem (1082 bytes)
	I0528 21:31:30.557222 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem (1123 bytes)
	I0528 21:31:30.557298 1355836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem (1675 bytes)
	I0528 21:31:30.557361 1355836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem org=jenkins.addons-504712 san=[127.0.0.1 192.168.49.2 addons-504712 localhost minikube]
	I0528 21:31:30.760799 1355836 provision.go:177] copyRemoteCerts
	I0528 21:31:30.760895 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:31:30.760947 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.778218 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:30.866885 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 21:31:30.890485 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 21:31:30.913436 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:31:30.937074 1355836 provision.go:87] duration metric: took 398.847464ms to configureAuth
	I0528 21:31:30.937099 1355836 ubuntu.go:193] setting minikube options for container-runtime
	I0528 21:31:30.937289 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:31:30.937400 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:30.954145 1355836 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:30.954385 1355836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34299 <nil> <nil>}
	I0528 21:31:30.954399 1355836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:31:31.179523 1355836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:31:31.179549 1355836 machine.go:97] duration metric: took 1.147212839s to provisionDockerMachine
	I0528 21:31:31.179561 1355836 client.go:171] duration metric: took 9.821888429s to LocalClient.Create
	I0528 21:31:31.179575 1355836 start.go:167] duration metric: took 9.821959017s to libmachine.API.Create "addons-504712"
	I0528 21:31:31.179583 1355836 start.go:293] postStartSetup for "addons-504712" (driver="docker")
	I0528 21:31:31.179599 1355836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:31:31.179667 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:31:31.179721 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.196693 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.290895 1355836 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:31:31.293870 1355836 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 21:31:31.293909 1355836 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 21:31:31.293920 1355836 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 21:31:31.293927 1355836 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 21:31:31.293937 1355836 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/addons for local assets ...
	I0528 21:31:31.294013 1355836 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/files for local assets ...
	I0528 21:31:31.294067 1355836 start.go:296] duration metric: took 114.473166ms for postStartSetup
	I0528 21:31:31.294381 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:31.311810 1355836 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/config.json ...
	I0528 21:31:31.312101 1355836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:31:31.312153 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.329124 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.415086 1355836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 21:31:31.420029 1355836 start.go:128] duration metric: took 10.065765363s to createHost
	I0528 21:31:31.420054 1355836 start.go:83] releasing machines lock for "addons-504712", held for 10.065900941s
	I0528 21:31:31.420134 1355836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-504712
	I0528 21:31:31.436280 1355836 ssh_runner.go:195] Run: cat /version.json
	I0528 21:31:31.436361 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.436630 1355836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:31:31.436691 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:31:31.454910 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.466538 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:31:31.545515 1355836 ssh_runner.go:195] Run: systemctl --version
	I0528 21:31:31.670589 1355836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:31:31.808978 1355836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 21:31:31.812905 1355836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:31:31.832336 1355836 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0528 21:31:31.832412 1355836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:31:31.864776 1355836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0528 21:31:31.864803 1355836 start.go:494] detecting cgroup driver to use...
	I0528 21:31:31.864837 1355836 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 21:31:31.864889 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:31:31.881256 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:31:31.891873 1355836 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:31:31.891934 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:31:31.905610 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:31:31.920099 1355836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:31:32.012711 1355836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:31:32.109248 1355836 docker.go:233] disabling docker service ...
	I0528 21:31:32.109325 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:31:32.129383 1355836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:31:32.141824 1355836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:31:32.235118 1355836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:31:32.328467 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:31:32.340976 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:31:32.358434 1355836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:31:32.358527 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.368988 1355836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:31:32.369105 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.379900 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.389352 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.399403 1355836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:31:32.408194 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.417688 1355836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.432977 1355836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:32.442766 1355836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:31:32.451589 1355836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:31:32.460415 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:32.542937 1355836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:31:32.646817 1355836 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:31:32.646951 1355836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:31:32.650442 1355836 start.go:562] Will wait 60s for crictl version
	I0528 21:31:32.650541 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:31:32.653850 1355836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:31:32.692108 1355836 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0528 21:31:32.692261 1355836 ssh_runner.go:195] Run: crio --version
	I0528 21:31:32.729043 1355836 ssh_runner.go:195] Run: crio --version
	I0528 21:31:32.772142 1355836 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0528 21:31:32.774138 1355836 cli_runner.go:164] Run: docker network inspect addons-504712 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 21:31:32.788708 1355836 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0528 21:31:32.792224 1355836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:32.802778 1355836 kubeadm.go:877] updating cluster {Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:31:32.802896 1355836 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:31:32.802960 1355836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:32.874662 1355836 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:31:32.874685 1355836 crio.go:433] Images already preloaded, skipping extraction
	I0528 21:31:32.874741 1355836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:32.916262 1355836 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:31:32.916283 1355836 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:31:32.916292 1355836 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 crio true true} ...
	I0528 21:31:32.916394 1355836 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-504712 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:31:32.916481 1355836 ssh_runner.go:195] Run: crio config
	I0528 21:31:32.967208 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:32.967271 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:32.967297 1355836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:31:32.967322 1355836 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-504712 NodeName:addons-504712 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:31:32.967466 1355836 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-504712"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:31:32.967540 1355836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:31:32.976306 1355836 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:31:32.976377 1355836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:31:32.984814 1355836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0528 21:31:33.005491 1355836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:31:33.025315 1355836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0528 21:31:33.043351 1355836 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0528 21:31:33.046765 1355836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:33.057474 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:33.139311 1355836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:31:33.153488 1355836 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712 for IP: 192.168.49.2
	I0528 21:31:33.153507 1355836 certs.go:194] generating shared ca certs ...
	I0528 21:31:33.153524 1355836 certs.go:226] acquiring lock for ca certs: {Name:mk3b01431a293453662fa80a6161920f23c6c736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.154117 1355836 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key
	I0528 21:31:33.631690 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt ...
	I0528 21:31:33.631722 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt: {Name:mkc01af482e04252e8a6c75b788228b3ac6e96f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.631920 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key ...
	I0528 21:31:33.631938 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key: {Name:mk2f271def928866fcca6ed23a4f3348d3f75bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:33.632034 1355836 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key
	I0528 21:31:34.346824 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt ...
	I0528 21:31:34.346859 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt: {Name:mk77a18aabf743cb34ab7b26a8e82ac7fae4a46f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.347570 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key ...
	I0528 21:31:34.347586 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key: {Name:mkca254a7fa65d1c5bd938defe82ebb6eb5a889c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.348129 1355836 certs.go:256] generating profile certs ...
	I0528 21:31:34.348197 1355836 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key
	I0528 21:31:34.348216 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt with IP's: []
	I0528 21:31:34.995925 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt ...
	I0528 21:31:34.995955 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: {Name:mk6b9e61238b6af500ff68f79693da49d282f1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.996154 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key ...
	I0528 21:31:34.996167 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.key: {Name:mk3a85eda1c7d1272e9add9aa5b7e3909a551fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:34.996252 1355836 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4
	I0528 21:31:34.996273 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0528 21:31:35.791823 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 ...
	I0528 21:31:35.791859 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4: {Name:mk365cd1516347d909b57c47c42d612209b5e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:35.792447 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4 ...
	I0528 21:31:35.792467 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4: {Name:mkaa8f3881f76cafcd1dec37848671f41f0b9728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:35.792615 1355836 certs.go:381] copying /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt.251e9af4 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt
	I0528 21:31:35.792703 1355836 certs.go:385] copying /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key.251e9af4 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key
	I0528 21:31:35.792763 1355836 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key
	I0528 21:31:35.792784 1355836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt with IP's: []
	I0528 21:31:36.191454 1355836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt ...
	I0528 21:31:36.191486 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt: {Name:mk30a886ec00f9560d299af22afab40b8fde72e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:36.192103 1355836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key ...
	I0528 21:31:36.192122 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key: {Name:mka8f3fad8ffa49d497c8786b8a3e7dfbf7d378f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:36.192324 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem (1679 bytes)
	I0528 21:31:36.192368 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem (1082 bytes)
	I0528 21:31:36.192397 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:31:36.192427 1355836 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem (1675 bytes)
	I0528 21:31:36.193069 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:31:36.218556 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:31:36.243527 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:31:36.271607 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:31:36.298769 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 21:31:36.324045 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 21:31:36.348902 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:31:36.373028 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:31:36.397059 1355836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:31:36.420915 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:31:36.439109 1355836 ssh_runner.go:195] Run: openssl version
	I0528 21:31:36.444411 1355836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:31:36.453758 1355836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.457248 1355836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.457347 1355836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:36.464264 1355836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:31:36.473977 1355836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:31:36.477551 1355836 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 21:31:36.477648 1355836 kubeadm.go:391] StartCluster: {Name:addons-504712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-504712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:31:36.477750 1355836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:31:36.477812 1355836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:31:36.515349 1355836 cri.go:89] found id: ""
	I0528 21:31:36.515417 1355836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 21:31:36.524318 1355836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:31:36.533088 1355836 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0528 21:31:36.533208 1355836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:31:36.542117 1355836 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:31:36.542140 1355836 kubeadm.go:156] found existing configuration files:
	
	I0528 21:31:36.542212 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:31:36.550881 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:31:36.550993 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:31:36.559608 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:31:36.568227 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:31:36.568337 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:31:36.576733 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:31:36.585440 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:31:36.585533 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:31:36.594209 1355836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:31:36.602829 1355836 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:31:36.602951 1355836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:31:36.611536 1355836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0528 21:31:36.657742 1355836 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 21:31:36.657887 1355836 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:31:36.697841 1355836 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0528 21:31:36.697957 1355836 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1062-aws
	I0528 21:31:36.698015 1355836 kubeadm.go:309] OS: Linux
	I0528 21:31:36.698099 1355836 kubeadm.go:309] CGROUPS_CPU: enabled
	I0528 21:31:36.698161 1355836 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0528 21:31:36.698232 1355836 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0528 21:31:36.698294 1355836 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0528 21:31:36.698362 1355836 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0528 21:31:36.698430 1355836 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0528 21:31:36.698497 1355836 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0528 21:31:36.698560 1355836 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0528 21:31:36.698619 1355836 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0528 21:31:36.761080 1355836 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:31:36.761291 1355836 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:31:36.761438 1355836 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:31:36.979169 1355836 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:31:36.984387 1355836 out.go:204]   - Generating certificates and keys ...
	I0528 21:31:36.984583 1355836 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:31:36.984698 1355836 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:31:37.779367 1355836 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 21:31:38.070410 1355836 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 21:31:38.823189 1355836 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 21:31:39.578654 1355836 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 21:31:39.829168 1355836 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 21:31:39.829483 1355836 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-504712 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 21:31:40.375507 1355836 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 21:31:40.375853 1355836 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-504712 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 21:31:41.102455 1355836 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 21:31:41.324553 1355836 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 21:31:42.529890 1355836 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 21:31:42.530355 1355836 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:31:43.674110 1355836 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:31:44.104765 1355836 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 21:31:44.597127 1355836 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:31:44.823223 1355836 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:31:46.054941 1355836 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:31:46.055624 1355836 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:31:46.060414 1355836 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:31:46.062828 1355836 out.go:204]   - Booting up control plane ...
	I0528 21:31:46.062928 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:31:46.063003 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:31:46.063074 1355836 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:31:46.073402 1355836 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:31:46.074394 1355836 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:31:46.074634 1355836 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:31:46.165083 1355836 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 21:31:46.165169 1355836 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 21:31:47.165950 1355836 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000859149s
	I0528 21:31:47.166060 1355836 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 21:31:53.167864 1355836 kubeadm.go:309] [api-check] The API server is healthy after 6.001986229s
	I0528 21:31:53.187019 1355836 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 21:31:53.199104 1355836 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 21:31:53.227782 1355836 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 21:31:53.227976 1355836 kubeadm.go:309] [mark-control-plane] Marking the node addons-504712 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 21:31:53.238637 1355836 kubeadm.go:309] [bootstrap-token] Using token: nlgfel.bhcj8g7dheyimwds
	I0528 21:31:53.240691 1355836 out.go:204]   - Configuring RBAC rules ...
	I0528 21:31:53.240847 1355836 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 21:31:53.245362 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 21:31:53.252933 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 21:31:53.256250 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 21:31:53.259911 1355836 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 21:31:53.265937 1355836 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 21:31:53.574848 1355836 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 21:31:54.024644 1355836 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 21:31:54.575594 1355836 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 21:31:54.577029 1355836 kubeadm.go:309] 
	I0528 21:31:54.577116 1355836 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 21:31:54.577128 1355836 kubeadm.go:309] 
	I0528 21:31:54.577218 1355836 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 21:31:54.577229 1355836 kubeadm.go:309] 
	I0528 21:31:54.577254 1355836 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 21:31:54.577318 1355836 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 21:31:54.577392 1355836 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 21:31:54.577400 1355836 kubeadm.go:309] 
	I0528 21:31:54.577469 1355836 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 21:31:54.577479 1355836 kubeadm.go:309] 
	I0528 21:31:54.577541 1355836 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 21:31:54.577548 1355836 kubeadm.go:309] 
	I0528 21:31:54.577627 1355836 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 21:31:54.577711 1355836 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 21:31:54.577794 1355836 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 21:31:54.577806 1355836 kubeadm.go:309] 
	I0528 21:31:54.577896 1355836 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 21:31:54.577972 1355836 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 21:31:54.577979 1355836 kubeadm.go:309] 
	I0528 21:31:54.578086 1355836 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nlgfel.bhcj8g7dheyimwds \
	I0528 21:31:54.578208 1355836 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dff89f9d96955ea12e5a34678503b154cb1ba84632124852cf6ec75aeb79db1c \
	I0528 21:31:54.578235 1355836 kubeadm.go:309] 	--control-plane 
	I0528 21:31:54.578243 1355836 kubeadm.go:309] 
	I0528 21:31:54.578327 1355836 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 21:31:54.578340 1355836 kubeadm.go:309] 
	I0528 21:31:54.578432 1355836 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nlgfel.bhcj8g7dheyimwds \
	I0528 21:31:54.578558 1355836 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dff89f9d96955ea12e5a34678503b154cb1ba84632124852cf6ec75aeb79db1c 
	I0528 21:31:54.582422 1355836 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-aws\n", err: exit status 1
	I0528 21:31:54.582552 1355836 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:31:54.582575 1355836 cni.go:84] Creating CNI manager for ""
	I0528 21:31:54.582583 1355836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:31:54.586664 1355836 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 21:31:54.588997 1355836 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 21:31:54.593686 1355836 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 21:31:54.593705 1355836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 21:31:54.612374 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 21:31:54.877216 1355836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:31:54.877289 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:54.877415 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-504712 minikube.k8s.io/updated_at=2024_05_28T21_31_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-504712 minikube.k8s.io/primary=true
	I0528 21:31:55.022813 1355836 ops.go:34] apiserver oom_adj: -16
	I0528 21:31:55.022925 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:55.523579 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:56.023735 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:56.523147 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:57.023068 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:57.523275 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:58.023474 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:58.524041 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:59.023246 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:31:59.523564 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:00.042514 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:00.523641 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:01.024026 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:01.523947 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:02.023013 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:02.523833 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:03.023667 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:03.523985 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:04.023000 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:04.523753 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:05.023563 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:05.523856 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:06.024051 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:06.523072 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:07.023021 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:07.523007 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:08.023608 1355836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 21:32:08.134918 1355836 kubeadm.go:1107] duration metric: took 13.257688459s to wait for elevateKubeSystemPrivileges
	W0528 21:32:08.134948 1355836 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 21:32:08.134956 1355836 kubeadm.go:393] duration metric: took 31.657312362s to StartCluster
	I0528 21:32:08.134971 1355836 settings.go:142] acquiring lock: {Name:mk3ead4661b05edfaa64061283a93c6a76969cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:08.135569 1355836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:32:08.136033 1355836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:08.136234 1355836 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:32:08.138377 1355836 out.go:177] * Verifying Kubernetes components...
	I0528 21:32:08.136322 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 21:32:08.136483 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:08.136491 1355836 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0528 21:32:08.140504 1355836 addons.go:69] Setting yakd=true in profile "addons-504712"
	I0528 21:32:08.140524 1355836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:08.140537 1355836 addons.go:234] Setting addon yakd=true in "addons-504712"
	I0528 21:32:08.140569 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.140630 1355836 addons.go:69] Setting ingress-dns=true in profile "addons-504712"
	I0528 21:32:08.140672 1355836 addons.go:234] Setting addon ingress-dns=true in "addons-504712"
	I0528 21:32:08.140705 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.141048 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.141140 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.141630 1355836 addons.go:69] Setting inspektor-gadget=true in profile "addons-504712"
	I0528 21:32:08.141660 1355836 addons.go:234] Setting addon inspektor-gadget=true in "addons-504712"
	I0528 21:32:08.141685 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.141745 1355836 addons.go:69] Setting cloud-spanner=true in profile "addons-504712"
	I0528 21:32:08.141767 1355836 addons.go:234] Setting addon cloud-spanner=true in "addons-504712"
	I0528 21:32:08.141787 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.142172 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.142176 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.145662 1355836 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-504712"
	I0528 21:32:08.145737 1355836 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-504712"
	I0528 21:32:08.145770 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.146394 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.152142 1355836 addons.go:69] Setting metrics-server=true in profile "addons-504712"
	I0528 21:32:08.152292 1355836 addons.go:234] Setting addon metrics-server=true in "addons-504712"
	I0528 21:32:08.152361 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.152900 1355836 addons.go:69] Setting default-storageclass=true in profile "addons-504712"
	I0528 21:32:08.152967 1355836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-504712"
	I0528 21:32:08.153234 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.156396 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.187382 1355836 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-504712"
	I0528 21:32:08.187490 1355836 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-504712"
	I0528 21:32:08.187567 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.187705 1355836 addons.go:69] Setting gcp-auth=true in profile "addons-504712"
	I0528 21:32:08.187884 1355836 addons.go:69] Setting registry=true in profile "addons-504712"
	I0528 21:32:08.187905 1355836 addons.go:234] Setting addon registry=true in "addons-504712"
	I0528 21:32:08.187928 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.188344 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.188503 1355836 mustload.go:65] Loading cluster: addons-504712
	I0528 21:32:08.188702 1355836 config.go:182] Loaded profile config "addons-504712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:08.189081 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.212716 1355836 addons.go:69] Setting storage-provisioner=true in profile "addons-504712"
	I0528 21:32:08.212763 1355836 addons.go:234] Setting addon storage-provisioner=true in "addons-504712"
	I0528 21:32:08.212805 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.213238 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.215546 1355836 addons.go:69] Setting ingress=true in profile "addons-504712"
	I0528 21:32:08.215628 1355836 addons.go:234] Setting addon ingress=true in "addons-504712"
	I0528 21:32:08.215701 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.216155 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.226692 1355836 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-504712"
	I0528 21:32:08.226801 1355836 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-504712"
	I0528 21:32:08.227423 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.235485 1355836 addons.go:234] Setting addon default-storageclass=true in "addons-504712"
	I0528 21:32:08.235526 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.235924 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.236234 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.267689 1355836 addons.go:69] Setting volcano=true in profile "addons-504712"
	I0528 21:32:08.267744 1355836 addons.go:234] Setting addon volcano=true in "addons-504712"
	I0528 21:32:08.267784 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.268198 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.289691 1355836 addons.go:69] Setting volumesnapshots=true in profile "addons-504712"
	I0528 21:32:08.289746 1355836 addons.go:234] Setting addon volumesnapshots=true in "addons-504712"
	I0528 21:32:08.289788 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.290261 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.296071 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 21:32:08.312662 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 21:32:08.321858 1355836 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 21:32:08.326714 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 21:32:08.326783 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 21:32:08.326882 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.326574 1355836 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 21:32:08.326580 1355836 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 21:32:08.326585 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 21:32:08.326589 1355836 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 21:32:08.326643 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.372100 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 21:32:08.376954 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:08.379490 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:08.380522 1355836 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 21:32:08.389807 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 21:32:08.386622 1355836 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 21:32:08.380531 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0528 21:32:08.391432 1355836 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0528 21:32:08.393209 1355836 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-504712"
	I0528 21:32:08.394264 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 21:32:08.394276 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 21:32:08.396959 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397086 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:32:08.404566 1355836 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:32:08.404589 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 21:32:08.404658 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397277 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 21:32:08.411808 1355836 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 21:32:08.411880 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.397285 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 21:32:08.397374 1355836 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 21:32:08.397413 1355836 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 21:32:08.397456 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:08.397466 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 21:32:08.419020 1355836 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 21:32:08.419212 1355836 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 21:32:08.419229 1355836 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 21:32:08.419235 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 21:32:08.419239 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 21:32:08.426211 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.426396 1355836 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 21:32:08.426443 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.429773 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.438968 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 21:32:08.436398 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:08.436433 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.441514 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.467073 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 21:32:08.469826 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 21:32:08.473159 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 21:32:08.478451 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 21:32:08.480936 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 21:32:08.480963 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 21:32:08.481033 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.517310 1355836 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 21:32:08.517332 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 21:32:08.517399 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.552447 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.578769 1355836 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 21:32:08.585652 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 21:32:08.585688 1355836 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 21:32:08.585764 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.592179 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.592796 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.600482 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.600820 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.645983 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.674549 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.690388 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.702148 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.716348 1355836 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 21:32:08.713576 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.714884 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.720884 1355836 out.go:177]   - Using image docker.io/busybox:stable
	I0528 21:32:08.723196 1355836 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 21:32:08.723215 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 21:32:08.723280 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:08.732062 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.752002 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:08.890961 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 21:32:08.891036 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 21:32:08.943611 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:32:08.947527 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 21:32:08.992656 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 21:32:08.992717 1355836 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 21:32:09.006228 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 21:32:09.051341 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 21:32:09.051412 1355836 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 21:32:09.107775 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 21:32:09.107846 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 21:32:09.131110 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 21:32:09.131188 1355836 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 21:32:09.144953 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 21:32:09.150332 1355836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.008685325s)
	I0528 21:32:09.150578 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 21:32:09.150829 1355836 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.010287934s)
	I0528 21:32:09.150918 1355836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:32:09.155569 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 21:32:09.155647 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 21:32:09.218307 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 21:32:09.218383 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 21:32:09.232545 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 21:32:09.232618 1355836 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 21:32:09.236346 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:32:09.243878 1355836 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 21:32:09.243949 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 21:32:09.266615 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 21:32:09.266685 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 21:32:09.274215 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 21:32:09.301825 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 21:32:09.314657 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 21:32:09.314717 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 21:32:09.340791 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 21:32:09.340866 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 21:32:09.439168 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 21:32:09.439190 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 21:32:09.484833 1355836 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 21:32:09.484853 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0528 21:32:09.496990 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 21:32:09.497019 1355836 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 21:32:09.528527 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 21:32:09.531963 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 21:32:09.532040 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 21:32:09.605260 1355836 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 21:32:09.605331 1355836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 21:32:09.636231 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 21:32:09.636311 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 21:32:09.723955 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 21:32:09.730267 1355836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 21:32:09.730337 1355836 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 21:32:09.748541 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 21:32:09.748617 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 21:32:09.810719 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 21:32:09.810789 1355836 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 21:32:09.843781 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 21:32:09.843854 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 21:32:09.948072 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 21:32:09.969720 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 21:32:09.969791 1355836 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 21:32:09.974277 1355836 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 21:32:09.974345 1355836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 21:32:09.977545 1355836 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:09.977640 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 21:32:10.086243 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 21:32:10.086334 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 21:32:10.089627 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:10.137647 1355836 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 21:32:10.137711 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 21:32:10.211945 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 21:32:10.212015 1355836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 21:32:10.278145 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 21:32:10.319675 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 21:32:10.319747 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 21:32:10.422775 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 21:32:10.422849 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 21:32:10.483220 1355836 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 21:32:10.483290 1355836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 21:32:10.503317 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 21:32:13.749217 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.805504042s)
	I0528 21:32:13.749282 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.801681082s)
	I0528 21:32:13.749315 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.743018111s)
	I0528 21:32:14.774198 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.629159516s)
	I0528 21:32:14.774244 1355836 addons.go:475] Verifying addon ingress=true in "addons-504712"
	I0528 21:32:14.776600 1355836 out.go:177] * Verifying ingress addon...
	I0528 21:32:14.774470 1355836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.623856224s)
	I0528 21:32:14.774487 1355836 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.62354689s)
	I0528 21:32:14.774517 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.538112697s)
	I0528 21:32:14.774536 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.500262016s)
	I0528 21:32:14.774574 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.472684045s)
	I0528 21:32:14.774604 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.246009632s)
	I0528 21:32:14.774630 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.050607423s)
	I0528 21:32:14.774708 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.826555614s)
	I0528 21:32:14.778562 1355836 addons.go:475] Verifying addon metrics-server=true in "addons-504712"
	I0528 21:32:14.779414 1355836 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 21:32:14.779591 1355836 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0528 21:32:14.780939 1355836 node_ready.go:35] waiting up to 6m0s for node "addons-504712" to be "Ready" ...
	I0528 21:32:14.783216 1355836 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-504712 service yakd-dashboard -n yakd-dashboard
	
	I0528 21:32:14.781345 1355836 addons.go:475] Verifying addon registry=true in "addons-504712"
	I0528 21:32:14.785681 1355836 out.go:177] * Verifying registry addon...
	I0528 21:32:14.788917 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 21:32:14.794327 1355836 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 21:32:14.794355 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:14.823944 1355836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 21:32:14.823966 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0528 21:32:14.824124 1355836 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0528 21:32:14.876326 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.786597968s)
	W0528 21:32:14.876380 1355836 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 21:32:14.876400 1355836 retry.go:31] will retry after 363.490839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 21:32:14.876475 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.598257267s)
	I0528 21:32:15.125247 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.621820192s)
	I0528 21:32:15.125289 1355836 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-504712"
	I0528 21:32:15.128289 1355836 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 21:32:15.131579 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 21:32:15.145505 1355836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 21:32:15.145542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:15.240685 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 21:32:15.284266 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:15.285190 1355836 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-504712" context rescaled to 1 replicas
	I0528 21:32:15.293510 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:15.636651 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:15.785414 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:15.793579 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:15.850284 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 21:32:15.850367 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:15.865715 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:15.968508 1355836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 21:32:15.988453 1355836 addons.go:234] Setting addon gcp-auth=true in "addons-504712"
	I0528 21:32:15.988505 1355836 host.go:66] Checking if "addons-504712" exists ...
	I0528 21:32:15.988944 1355836 cli_runner.go:164] Run: docker container inspect addons-504712 --format={{.State.Status}}
	I0528 21:32:16.009784 1355836 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 21:32:16.009841 1355836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-504712
	I0528 21:32:16.029793 1355836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34299 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/addons-504712/id_rsa Username:docker}
	I0528 21:32:16.136197 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:16.283728 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:16.293026 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:16.637597 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:16.784526 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:16.787473 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:16.793198 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.136371 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:17.284722 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:17.304643 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.655401 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:17.819102 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:17.820596 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.093319 1355836 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.083503029s)
	I0528 21:32:18.095807 1355836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 21:32:18.093572 1355836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.852847101s)
	I0528 21:32:18.097665 1355836 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 21:32:18.099303 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 21:32:18.099323 1355836 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 21:32:18.136594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:18.155878 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 21:32:18.155906 1355836 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 21:32:18.182212 1355836 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 21:32:18.182233 1355836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 21:32:18.212564 1355836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 21:32:18.283672 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.302234 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:18.635679 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:18.793788 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:18.794234 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:18.804984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:18.925315 1355836 addons.go:475] Verifying addon gcp-auth=true in "addons-504712"
	I0528 21:32:18.927988 1355836 out.go:177] * Verifying gcp-auth addon...
	I0528 21:32:18.931332 1355836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 21:32:18.937813 1355836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 21:32:18.937833 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:19.136007 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:19.287381 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:19.292955 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:19.436756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:19.636093 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:19.783410 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:19.793192 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:19.937436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:20.138337 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:20.284850 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:20.293224 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:20.435140 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:20.635845 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:20.791712 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:20.799219 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:20.935243 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:21.136135 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:21.284298 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:21.284714 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:21.292939 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:21.435558 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:21.638755 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:21.784447 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:21.792810 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:21.935297 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:22.138113 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:22.287006 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:22.293292 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:22.435366 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:22.636105 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:22.783747 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:22.793542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:22.935578 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:23.136636 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:23.284519 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:23.285513 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:23.293525 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:23.435589 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:23.635759 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:23.784367 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:23.792734 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:23.935172 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:24.136920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:24.283593 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:24.293082 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:24.436476 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:24.637023 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:24.786319 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:24.793318 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:24.935728 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:25.139711 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:25.284902 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:25.285395 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:25.292499 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:25.435691 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:25.636474 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:25.783514 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:25.793804 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:25.935157 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:26.136510 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:26.283545 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:26.292909 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:26.435000 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:26.635724 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:26.785811 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:26.792916 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:26.934909 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:27.135504 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:27.284299 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:27.294333 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:27.435492 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:27.635906 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:27.783468 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:27.785261 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:27.792843 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:27.935049 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:28.136225 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:28.283359 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:28.292763 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:28.435251 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:28.636393 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:28.783911 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:28.793793 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:28.934750 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:29.136102 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:29.283804 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:29.292957 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:29.435413 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:29.636278 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:29.784085 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:29.785964 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:29.793205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:29.935179 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:30.137226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:30.284541 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:30.293743 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:30.435317 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:30.636756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:30.783862 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:30.793241 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:30.935146 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:31.136043 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:31.283590 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:31.293074 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:31.435050 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:31.635984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:31.784669 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:31.792844 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:31.935276 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:32.136465 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:32.283539 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:32.284797 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:32.294005 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:32.435071 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:32.637705 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:32.784257 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:32.793600 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:32.937064 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:33.135803 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:33.285354 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:33.294148 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:33.435217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:33.635851 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:33.785811 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:33.792704 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:33.935122 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:34.136533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:34.283481 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:34.285180 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:34.292987 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:34.435061 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:34.636276 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:34.783778 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:34.793177 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:34.935366 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:35.137225 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:35.283127 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:35.293028 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:35.435362 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:35.636378 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:35.784186 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:35.792865 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:35.934811 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:36.135963 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:36.284939 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:36.285486 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:36.292667 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:36.434783 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:36.635784 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:36.784751 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:36.792757 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:36.934570 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:37.136592 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:37.284893 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:37.293378 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:37.435190 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:37.641465 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:37.786198 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:37.792891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:37.935019 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:38.135663 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:38.283170 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:38.292816 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:38.434778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:38.635992 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:38.783491 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:38.785056 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:38.793136 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:38.935675 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:39.135881 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:39.283877 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:39.292897 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:39.435113 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:39.638407 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:39.783984 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:39.792731 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:39.936982 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:40.137497 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:40.285144 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:40.295044 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:40.437787 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:40.636728 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:40.783677 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:40.792523 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:40.934618 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:41.145328 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:41.284389 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:41.284816 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:41.293260 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:41.435763 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:41.636312 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:41.783809 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:41.792868 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:41.934866 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:42.145534 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:42.284629 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:42.294189 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:42.435051 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:42.636278 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:42.784144 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:42.793388 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:42.935376 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:43.137529 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:43.283978 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:43.284449 1355836 node_ready.go:53] node "addons-504712" has status "Ready":"False"
	I0528 21:32:43.293319 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:43.435355 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:43.636691 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:43.805441 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:43.805978 1355836 node_ready.go:49] node "addons-504712" has status "Ready":"True"
	I0528 21:32:43.806000 1355836 node_ready.go:38] duration metric: took 29.025038928s for node "addons-504712" to be "Ready" ...
	I0528 21:32:43.806009 1355836 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:43.810514 1355836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 21:32:43.810544 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:43.837519 1355836 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:43.945879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:44.160134 1355836 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 21:32:44.160163 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:44.407117 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:44.412496 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:44.459790 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:44.645908 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:44.787656 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:44.795256 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:44.936254 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:45.151954 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:45.286448 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:45.296321 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:45.348718 1355836 pod_ready.go:92] pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.348746 1355836 pod_ready.go:81] duration metric: took 1.511192536s for pod "coredns-7db6d8ff4d-5qs7n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.348766 1355836 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.357636 1355836 pod_ready.go:92] pod "etcd-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.357663 1355836 pod_ready.go:81] duration metric: took 8.888211ms for pod "etcd-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.357679 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.365407 1355836 pod_ready.go:92] pod "kube-apiserver-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.365435 1355836 pod_ready.go:81] duration metric: took 7.747924ms for pod "kube-apiserver-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.365448 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.371850 1355836 pod_ready.go:92] pod "kube-controller-manager-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.371879 1355836 pod_ready.go:81] duration metric: took 6.423149ms for pod "kube-controller-manager-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.371894 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kdmkz" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.385799 1355836 pod_ready.go:92] pod "kube-proxy-kdmkz" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.385825 1355836 pod_ready.go:81] duration metric: took 13.923415ms for pod "kube-proxy-kdmkz" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.385838 1355836 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.435806 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:45.636944 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:45.785027 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:45.786329 1355836 pod_ready.go:92] pod "kube-scheduler-addons-504712" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:45.786353 1355836 pod_ready.go:81] duration metric: took 400.506788ms for pod "kube-scheduler-addons-504712" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.786365 1355836 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:45.794138 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:45.936054 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:46.137635 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:46.284192 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:46.293796 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:46.435774 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:46.640536 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:46.784807 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:46.793980 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:46.936463 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:47.137831 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:47.284436 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:47.294369 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:47.437152 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:47.640920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:47.785859 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:47.796210 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:47.799205 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:47.944611 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:48.137546 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:48.284940 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:48.303874 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:48.437960 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:48.640364 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:48.787072 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:48.810622 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:48.936158 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:49.144863 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:49.306461 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:49.307759 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:49.436778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:49.639260 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:49.784424 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:49.802570 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:49.807418 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:49.934984 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:50.139586 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:50.284349 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:50.307340 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:50.437580 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:50.639623 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:50.787713 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:50.796683 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:50.939020 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:51.137537 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:51.283686 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:51.294529 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:51.434891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:51.636924 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:51.784150 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:51.796762 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:51.935658 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:52.138533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:52.284790 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:52.308280 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:52.308625 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:52.435217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:52.649901 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:52.784528 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:52.803805 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:52.935952 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:53.138309 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:53.284266 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:53.316830 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:53.435540 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:53.637264 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:53.786293 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:53.815588 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:53.936195 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:54.143110 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:54.284498 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:54.297377 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:54.435944 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:54.639239 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:54.785544 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:54.798928 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:54.799614 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:54.935609 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:55.138436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:55.285254 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:55.306802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:55.435594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:55.638169 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:55.784653 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:55.796742 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:55.934999 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:56.137457 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:56.284670 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:56.298013 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:56.435854 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:56.637236 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:56.784031 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:56.795320 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:56.937019 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:57.137798 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:57.284872 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:57.293193 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:57.297469 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:57.435329 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:57.637621 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:57.784349 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:57.796807 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:57.935442 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:58.137170 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:58.303260 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:58.309039 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:58.436439 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:58.636834 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:58.783635 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:58.792725 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:58.934663 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:59.139933 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:59.284015 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:59.294197 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:59.296695 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:59.450186 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:32:59.636848 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:32:59.784253 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:32:59.798756 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:32:59.935513 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:00.156840 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:00.286987 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:00.302884 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:00.442819 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:00.637123 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:00.785234 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:00.794928 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:00.935736 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:01.137257 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:01.283371 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:01.294330 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:01.435500 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:01.638226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:01.784393 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:01.794172 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:01.796817 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:01.935352 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:02.158214 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:02.312610 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:02.323765 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:02.436758 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:02.640826 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:02.784464 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:02.796422 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:02.935574 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:03.137995 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:03.286495 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:03.308515 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:03.437185 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:03.638888 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:03.784508 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:03.797252 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:03.936226 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:04.154473 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:04.284371 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:04.298655 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:04.303305 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:04.435420 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:04.638067 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:04.805936 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:04.806533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:04.936008 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:05.137509 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:05.283814 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:05.296911 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:05.435103 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:05.639468 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:05.785056 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:05.802416 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:05.935387 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:06.137920 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:06.284421 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:06.294198 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:06.437394 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:06.638216 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:06.784334 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:06.792802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:06.793529 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:06.935668 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:07.137437 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:07.286573 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:07.302608 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:07.436560 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:07.638208 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:07.783435 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:07.801421 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:07.935584 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:08.137542 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:08.284113 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:08.294533 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:08.435298 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:08.637749 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:08.783761 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:08.795830 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:08.935307 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:09.146844 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:09.284545 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:09.325135 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:09.326340 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:09.435369 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:09.637212 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:09.785513 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:09.796498 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:09.936762 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:10.139025 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:10.284142 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:10.295795 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:10.435374 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:10.638546 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:10.783789 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:10.795431 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:10.935157 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:11.137662 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:11.283905 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:11.294511 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:11.435523 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:11.639439 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:11.783773 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:11.794708 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:11.795044 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:11.937919 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:12.137073 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:12.284601 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:12.294292 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:12.435270 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:12.637259 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:12.785021 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:12.800264 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:12.937569 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:13.139493 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:13.284448 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:13.311679 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:13.435952 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:13.638741 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:13.796094 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:13.801471 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:13.804350 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:13.935407 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:14.137192 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:14.288796 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:14.306339 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:14.437721 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:14.640338 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:14.785812 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:14.794566 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:14.936772 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:15.137925 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:15.285668 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:15.297078 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:15.435790 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:15.637702 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:15.785824 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:15.804144 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:15.937733 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:16.139363 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:16.283578 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:16.294115 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:16.307693 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:16.435210 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:16.637891 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:16.786183 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:16.794550 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:16.934478 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:17.137271 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:17.283689 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:17.294473 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:17.435349 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:17.637218 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:17.785974 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:17.799603 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:17.935685 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:18.137212 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:18.284878 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:18.297436 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:18.437667 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:18.639140 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:18.784626 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:18.792788 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:18.794196 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:18.934601 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:19.137858 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:19.284118 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:19.295132 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:19.436802 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:19.637699 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:19.783763 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:19.797718 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:19.940112 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:20.138599 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:20.284941 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:20.294723 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:20.435481 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:20.638176 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:20.784357 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:20.794406 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:20.940879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:21.145696 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:21.284610 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:21.295320 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:21.295988 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:21.435630 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:21.637863 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:21.784720 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:21.797938 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:21.935521 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:22.137563 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:22.297003 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:22.317072 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:22.435879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:22.637809 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:22.784794 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:22.794584 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:22.935586 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:23.139059 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:23.285491 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:23.312511 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:23.314120 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:23.436018 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:23.638479 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:23.786334 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:23.795877 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:23.935384 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:24.139316 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:24.285890 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:24.313703 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:24.435565 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:24.637263 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:24.784044 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:24.802334 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:24.938932 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:25.153188 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:25.284260 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:25.297444 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 21:33:25.435031 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:25.637380 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:25.784543 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:25.793943 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:25.794756 1355836 kapi.go:107] duration metric: took 1m11.005836437s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 21:33:25.941894 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:26.136992 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:26.284036 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:26.435632 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:26.638231 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:26.785492 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:26.936384 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:27.137516 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:27.284063 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:27.434834 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:27.639791 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:27.784609 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:27.795721 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:27.943235 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:28.138645 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:28.286879 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:28.436993 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:28.638346 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:28.786900 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:28.937382 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:29.154481 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:29.286623 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:29.439255 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:29.637205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:29.784651 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:29.935179 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:30.139090 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:30.284093 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:30.293015 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:30.436182 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:30.638009 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:30.785025 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:30.935751 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:31.138182 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:31.285537 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:31.437169 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:31.637337 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:31.784967 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:31.935097 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:32.136919 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:32.283934 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:32.300284 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:32.441778 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:32.637266 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:32.784175 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:32.934968 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:33.136958 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:33.284293 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:33.434797 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:33.637600 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:33.783721 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:33.936697 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:34.136838 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:34.284932 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:34.435322 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:34.638155 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:34.790009 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:34.796864 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:34.934826 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:35.137945 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:35.288361 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:35.435174 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:35.637879 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:35.784093 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:35.935553 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:36.138493 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:36.284182 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:36.435295 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:36.638340 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:36.784794 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:36.936200 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:37.137452 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 21:33:37.284426 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:37.291604 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:37.435417 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:37.637523 1355836 kapi.go:107] duration metric: took 1m22.505939725s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 21:33:37.784443 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:37.935551 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:38.283803 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:38.435688 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:38.783455 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:38.935594 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:39.284190 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:39.292911 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:39.436686 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:39.783577 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:39.938183 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:40.283737 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:40.435929 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:40.784852 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:40.936024 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:41.284094 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:41.434840 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:41.784494 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:41.792354 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:41.935317 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:42.285081 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:42.435487 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:42.784302 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:42.935224 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:43.283873 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:43.435374 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:43.784505 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:43.934653 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:44.284206 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:44.293041 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:44.436123 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:44.784674 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:44.935215 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:45.284869 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:45.436205 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:45.784402 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:45.934609 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:46.283919 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:46.293154 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:46.435561 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:46.784806 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:46.939046 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:47.283333 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:47.435018 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:47.784564 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:47.935237 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:48.284605 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:48.435037 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:48.783531 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:48.792346 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:48.935287 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:49.284339 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:49.435642 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:49.794352 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:49.935785 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:50.285042 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:50.435671 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:50.785424 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:50.797045 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:50.936217 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:51.284546 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:51.437445 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:51.785547 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:51.936888 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:52.283737 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:52.435547 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:52.784011 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:52.946010 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:53.283581 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:53.298545 1355836 pod_ready.go:102] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"False"
	I0528 21:33:53.435412 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:53.785329 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:53.935677 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.292517 1355836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 21:33:54.308565 1355836 pod_ready.go:92] pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace has status "Ready":"True"
	I0528 21:33:54.308637 1355836 pod_ready.go:81] duration metric: took 1m8.522262577s for pod "metrics-server-c59844bb4-99j6d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.308665 1355836 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.317195 1355836 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace has status "Ready":"True"
	I0528 21:33:54.317266 1355836 pod_ready.go:81] duration metric: took 8.58069ms for pod "nvidia-device-plugin-daemonset-p6z9d" in "kube-system" namespace to be "Ready" ...
	I0528 21:33:54.317303 1355836 pod_ready.go:38] duration metric: took 1m10.51126126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:33:54.317344 1355836 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:33:54.317389 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:33:54.317472 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:33:54.371255 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:33:54.371279 1355836 cri.go:89] found id: ""
	I0528 21:33:54.371287 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:33:54.371342 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.374843 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:33:54.374926 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:33:54.438938 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.452048 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:33:54.452071 1355836 cri.go:89] found id: ""
	I0528 21:33:54.452080 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:33:54.452133 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.456003 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:33:54.456073 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:33:54.505220 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:33:54.505250 1355836 cri.go:89] found id: ""
	I0528 21:33:54.505257 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:33:54.505319 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.509052 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:33:54.509122 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:33:54.555663 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:33:54.555736 1355836 cri.go:89] found id: ""
	I0528 21:33:54.555759 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:33:54.555843 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.560074 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:33:54.560198 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:33:54.608141 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:33:54.608225 1355836 cri.go:89] found id: ""
	I0528 21:33:54.608247 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:33:54.608347 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.613242 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:33:54.613397 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:33:54.666134 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:33:54.666223 1355836 cri.go:89] found id: ""
	I0528 21:33:54.666247 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:33:54.666342 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.672415 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:33:54.672580 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:33:54.722230 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:33:54.722262 1355836 cri.go:89] found id: ""
	I0528 21:33:54.722270 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:33:54.722334 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:33:54.727533 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:33:54.727560 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:33:54.796601 1355836 kapi.go:107] duration metric: took 1m40.017182978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0528 21:33:54.800735 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:33:54.800757 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:33:54.851302 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:33:54.851342 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:33:54.888344 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:33:54.888371 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:33:54.932069 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:33:54.932095 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:33:54.936002 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:54.989493 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:33:54.989527 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:33:55.074331 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:33:55.074368 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:33:55.192611 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:33:55.192654 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:33:55.231788 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:33:55.232004 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:33:55.278589 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:33:55.278623 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:33:55.297947 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:33:55.297985 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:33:55.439755 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:55.482367 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:33:55.482449 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:33:55.561175 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:33:55.561208 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:33:55.615990 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:33:55.616022 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:33:55.616076 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:33:55.616090 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:33:55.616097 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:33:55.616109 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:33:55.616115 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:33:55.935702 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:56.438724 1355836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:33:56.934646 1355836 kapi.go:107] duration metric: took 1m38.003309418s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 21:33:56.937270 1355836 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-504712 cluster.
	I0528 21:33:56.939637 1355836 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 21:33:56.941662 1355836 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 21:33:56.944096 1355836 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0528 21:33:56.946229 1355836 addons.go:510] duration metric: took 1m48.809728539s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns metrics-server nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
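The gcp-auth messages above describe opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of what that could look like, using a hypothetical pod name and image; the log only states that the label key must be present, so the "true" value here is an assumption:

	kubectl --context addons-504712 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # label key named in the gcp-auth message above
	spec:
	  containers:
	  - name: busybox
	    image: busybox                  # hypothetical image
	    command: ["sleep", "3600"]
	EOF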
	I0528 21:34:05.616886 1355836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:34:05.630059 1355836 api_server.go:72] duration metric: took 1m57.49379782s to wait for apiserver process to appear ...
	I0528 21:34:05.630088 1355836 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:34:05.630121 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:34:05.630181 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:34:05.667264 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:05.667285 1355836 cri.go:89] found id: ""
	I0528 21:34:05.667293 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:34:05.667351 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.670776 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:34:05.670846 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:34:05.712939 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:05.712964 1355836 cri.go:89] found id: ""
	I0528 21:34:05.712972 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:34:05.713028 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.716770 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:34:05.716845 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:34:05.757263 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:05.757285 1355836 cri.go:89] found id: ""
	I0528 21:34:05.757293 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:34:05.757347 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.760708 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:34:05.760774 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:34:05.799571 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:05.799592 1355836 cri.go:89] found id: ""
	I0528 21:34:05.799601 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:34:05.799660 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.803317 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:34:05.803390 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:34:05.847332 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:05.847352 1355836 cri.go:89] found id: ""
	I0528 21:34:05.847360 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:34:05.847415 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.850826 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:34:05.850902 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:34:05.890416 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:05.890443 1355836 cri.go:89] found id: ""
	I0528 21:34:05.890451 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:34:05.890510 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.893871 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:34:05.893943 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:34:05.936497 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:05.936521 1355836 cri.go:89] found id: ""
	I0528 21:34:05.936529 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:34:05.936586 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:05.940213 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:34:05.940247 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:34:05.960261 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:34:05.960290 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:06.018297 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:34:06.018334 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:06.104191 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:34:06.104226 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:34:06.155670 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:34:06.155702 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:34:06.194482 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:06.194698 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:06.242820 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:34:06.242856 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:06.298617 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:34:06.298648 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:06.340104 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:34:06.340134 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:06.387329 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:34:06.387357 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:06.424482 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:34:06.424524 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:06.471224 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:34:06.471254 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:34:06.566443 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:34:06.566518 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:34:06.705996 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:06.706099 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:34:06.706185 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:34:06.706225 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:06.706256 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:06.706301 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:06.706320 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:34:16.707164 1355836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0528 21:34:16.714645 1355836 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0528 21:34:16.715609 1355836 api_server.go:141] control plane version: v1.30.1
	I0528 21:34:16.715636 1355836 api_server.go:131] duration metric: took 11.08553971s to wait for apiserver health ...
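The health probe above hits the apiserver's /healthz endpoint directly. A rough way to reproduce the same check by hand, assuming shell access to the host; the address and the expected "ok" body are taken from the lines above, and the -k flag is an assumption made only because the apiserver serves a self-signed certificate:

	curl -sk https://192.168.49.2:8443/healthz
	# prints: ok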
	I0528 21:34:16.715645 1355836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:34:16.715665 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:34:16.715726 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:34:16.755795 1355836 cri.go:89] found id: "e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:16.755817 1355836 cri.go:89] found id: ""
	I0528 21:34:16.755825 1355836 logs.go:276] 1 containers: [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81]
	I0528 21:34:16.755901 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.759299 1355836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:34:16.759369 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:34:16.801408 1355836 cri.go:89] found id: "3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:16.801430 1355836 cri.go:89] found id: ""
	I0528 21:34:16.801438 1355836 logs.go:276] 1 containers: [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2]
	I0528 21:34:16.801495 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.805484 1355836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:34:16.805558 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:34:16.856090 1355836 cri.go:89] found id: "0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:16.856113 1355836 cri.go:89] found id: ""
	I0528 21:34:16.856121 1355836 logs.go:276] 1 containers: [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15]
	I0528 21:34:16.856181 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.859858 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:34:16.859931 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:34:16.898806 1355836 cri.go:89] found id: "5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:16.898830 1355836 cri.go:89] found id: ""
	I0528 21:34:16.898837 1355836 logs.go:276] 1 containers: [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c]
	I0528 21:34:16.898900 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.902333 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:34:16.902402 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:34:16.941755 1355836 cri.go:89] found id: "a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:16.941779 1355836 cri.go:89] found id: ""
	I0528 21:34:16.941787 1355836 logs.go:276] 1 containers: [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb]
	I0528 21:34:16.941839 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.945208 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:34:16.945280 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:34:16.983476 1355836 cri.go:89] found id: "caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:16.983498 1355836 cri.go:89] found id: ""
	I0528 21:34:16.983506 1355836 logs.go:276] 1 containers: [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8]
	I0528 21:34:16.983560 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:16.987701 1355836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:34:16.987792 1355836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:34:17.030638 1355836 cri.go:89] found id: "b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:17.030661 1355836 cri.go:89] found id: ""
	I0528 21:34:17.030668 1355836 logs.go:276] 1 containers: [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a]
	I0528 21:34:17.030754 1355836 ssh_runner.go:195] Run: which crictl
	I0528 21:34:17.034223 1355836 logs.go:123] Gathering logs for kubelet ...
	I0528 21:34:17.034246 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 21:34:17.073050 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:17.073261 1355836 logs.go:138] Found kubelet problem: May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:17.123432 1355836 logs.go:123] Gathering logs for dmesg ...
	I0528 21:34:17.123471 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:34:17.142282 1355836 logs.go:123] Gathering logs for etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] ...
	I0528 21:34:17.142310 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2"
	I0528 21:34:17.191754 1355836 logs.go:123] Gathering logs for kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] ...
	I0528 21:34:17.191786 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c"
	I0528 21:34:17.236288 1355836 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:34:17.236318 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:34:17.332274 1355836 logs.go:123] Gathering logs for container status ...
	I0528 21:34:17.332312 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:34:17.379855 1355836 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:34:17.379889 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:34:17.516502 1355836 logs.go:123] Gathering logs for kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] ...
	I0528 21:34:17.516529 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81"
	I0528 21:34:17.585125 1355836 logs.go:123] Gathering logs for coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] ...
	I0528 21:34:17.585156 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15"
	I0528 21:34:17.628067 1355836 logs.go:123] Gathering logs for kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] ...
	I0528 21:34:17.628096 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb"
	I0528 21:34:17.667467 1355836 logs.go:123] Gathering logs for kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] ...
	I0528 21:34:17.667496 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8"
	I0528 21:34:17.755784 1355836 logs.go:123] Gathering logs for kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] ...
	I0528 21:34:17.755820 1355836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a"
	I0528 21:34:17.798516 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:17.798541 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 21:34:17.798596 1355836 out.go:239] X Problems detected in kubelet:
	W0528 21:34:17.798607 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: W0528 21:32:43.750886    1526 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	W0528 21:34:17.798614 1355836 out.go:239]   May 28 21:32:43 addons-504712 kubelet[1526]: E0528 21:32:43.750926    1526 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-504712" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-504712' and this object
	I0528 21:34:17.798627 1355836 out.go:304] Setting ErrFile to fd 2...
	I0528 21:34:17.798635 1355836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:34:27.809248 1355836 system_pods.go:59] 18 kube-system pods found
	I0528 21:34:27.809288 1355836 system_pods.go:61] "coredns-7db6d8ff4d-5qs7n" [123e6e9f-938c-4637-9df3-48445a053447] Running
	I0528 21:34:27.809294 1355836 system_pods.go:61] "csi-hostpath-attacher-0" [03affddc-37af-40ff-91d0-201caebcf9d4] Running
	I0528 21:34:27.809298 1355836 system_pods.go:61] "csi-hostpath-resizer-0" [9a45d0fd-aba5-4709-b9cd-9bdc1a3ae6d2] Running
	I0528 21:34:27.809302 1355836 system_pods.go:61] "csi-hostpathplugin-whvsm" [a24550b9-f416-492b-aec0-fb3a0247163d] Running
	I0528 21:34:27.809306 1355836 system_pods.go:61] "etcd-addons-504712" [14693a74-3f6d-434a-9659-b8117a7f4cfe] Running
	I0528 21:34:27.809311 1355836 system_pods.go:61] "kindnet-h8d66" [a1157f9e-ea43-46f3-bc60-a3f92737ea52] Running
	I0528 21:34:27.809316 1355836 system_pods.go:61] "kube-apiserver-addons-504712" [f56fa365-330a-4949-acba-efa405848af8] Running
	I0528 21:34:27.809320 1355836 system_pods.go:61] "kube-controller-manager-addons-504712" [8f1a9ee6-4cbc-4605-9900-707045250fa5] Running
	I0528 21:34:27.809328 1355836 system_pods.go:61] "kube-ingress-dns-minikube" [f992c4bf-c862-45ab-bbb9-bc45aa22a765] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 21:34:27.809333 1355836 system_pods.go:61] "kube-proxy-kdmkz" [6d9390b9-56ba-40e3-80d9-68427b904453] Running
	I0528 21:34:27.809345 1355836 system_pods.go:61] "kube-scheduler-addons-504712" [fd2276ce-151c-41df-8b1c-a8ec138481f9] Running
	I0528 21:34:27.809350 1355836 system_pods.go:61] "metrics-server-c59844bb4-99j6d" [6c20ca2e-5167-4501-8529-d317230ce330] Running
	I0528 21:34:27.809354 1355836 system_pods.go:61] "nvidia-device-plugin-daemonset-p6z9d" [5a8692ef-b68d-4ec3-a15c-1c8c61eff11e] Running
	I0528 21:34:27.809361 1355836 system_pods.go:61] "registry-gjvvs" [769902e5-f85c-4a07-b2c6-d37f1fb19841] Running
	I0528 21:34:27.809364 1355836 system_pods.go:61] "registry-proxy-8zzlh" [acd09f12-58ca-45ba-a43a-ccae6df2d939] Running
	I0528 21:34:27.809367 1355836 system_pods.go:61] "snapshot-controller-745499f584-tqm7g" [759bb548-b796-41f3-a876-a454c4679056] Running
	I0528 21:34:27.809372 1355836 system_pods.go:61] "snapshot-controller-745499f584-w8hf9" [1f31c5a1-57e5-4832-832a-97cf98ffbf32] Running
	I0528 21:34:27.809376 1355836 system_pods.go:61] "storage-provisioner" [120e8c42-5a1a-459e-acae-21a1a864b05d] Running
	I0528 21:34:27.809384 1355836 system_pods.go:74] duration metric: took 11.093731387s to wait for pod list to return data ...
	I0528 21:34:27.809398 1355836 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:34:27.811921 1355836 default_sa.go:45] found service account: "default"
	I0528 21:34:27.811946 1355836 default_sa.go:55] duration metric: took 2.542131ms for default service account to be created ...
	I0528 21:34:27.811955 1355836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:34:27.822813 1355836 system_pods.go:86] 18 kube-system pods found
	I0528 21:34:27.822851 1355836 system_pods.go:89] "coredns-7db6d8ff4d-5qs7n" [123e6e9f-938c-4637-9df3-48445a053447] Running
	I0528 21:34:27.822859 1355836 system_pods.go:89] "csi-hostpath-attacher-0" [03affddc-37af-40ff-91d0-201caebcf9d4] Running
	I0528 21:34:27.822867 1355836 system_pods.go:89] "csi-hostpath-resizer-0" [9a45d0fd-aba5-4709-b9cd-9bdc1a3ae6d2] Running
	I0528 21:34:27.822871 1355836 system_pods.go:89] "csi-hostpathplugin-whvsm" [a24550b9-f416-492b-aec0-fb3a0247163d] Running
	I0528 21:34:27.822875 1355836 system_pods.go:89] "etcd-addons-504712" [14693a74-3f6d-434a-9659-b8117a7f4cfe] Running
	I0528 21:34:27.822880 1355836 system_pods.go:89] "kindnet-h8d66" [a1157f9e-ea43-46f3-bc60-a3f92737ea52] Running
	I0528 21:34:27.822884 1355836 system_pods.go:89] "kube-apiserver-addons-504712" [f56fa365-330a-4949-acba-efa405848af8] Running
	I0528 21:34:27.822889 1355836 system_pods.go:89] "kube-controller-manager-addons-504712" [8f1a9ee6-4cbc-4605-9900-707045250fa5] Running
	I0528 21:34:27.822899 1355836 system_pods.go:89] "kube-ingress-dns-minikube" [f992c4bf-c862-45ab-bbb9-bc45aa22a765] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 21:34:27.822904 1355836 system_pods.go:89] "kube-proxy-kdmkz" [6d9390b9-56ba-40e3-80d9-68427b904453] Running
	I0528 21:34:27.822910 1355836 system_pods.go:89] "kube-scheduler-addons-504712" [fd2276ce-151c-41df-8b1c-a8ec138481f9] Running
	I0528 21:34:27.822914 1355836 system_pods.go:89] "metrics-server-c59844bb4-99j6d" [6c20ca2e-5167-4501-8529-d317230ce330] Running
	I0528 21:34:27.822918 1355836 system_pods.go:89] "nvidia-device-plugin-daemonset-p6z9d" [5a8692ef-b68d-4ec3-a15c-1c8c61eff11e] Running
	I0528 21:34:27.822923 1355836 system_pods.go:89] "registry-gjvvs" [769902e5-f85c-4a07-b2c6-d37f1fb19841] Running
	I0528 21:34:27.822926 1355836 system_pods.go:89] "registry-proxy-8zzlh" [acd09f12-58ca-45ba-a43a-ccae6df2d939] Running
	I0528 21:34:27.822930 1355836 system_pods.go:89] "snapshot-controller-745499f584-tqm7g" [759bb548-b796-41f3-a876-a454c4679056] Running
	I0528 21:34:27.822935 1355836 system_pods.go:89] "snapshot-controller-745499f584-w8hf9" [1f31c5a1-57e5-4832-832a-97cf98ffbf32] Running
	I0528 21:34:27.822946 1355836 system_pods.go:89] "storage-provisioner" [120e8c42-5a1a-459e-acae-21a1a864b05d] Running
	I0528 21:34:27.822954 1355836 system_pods.go:126] duration metric: took 10.993511ms to wait for k8s-apps to be running ...
	I0528 21:34:27.822963 1355836 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:34:27.823044 1355836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:34:27.838495 1355836 system_svc.go:56] duration metric: took 15.521939ms WaitForService to wait for kubelet
	I0528 21:34:27.838522 1355836 kubeadm.go:576] duration metric: took 2m19.702265644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:34:27.838541 1355836 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:34:27.841994 1355836 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 21:34:27.842065 1355836 node_conditions.go:123] node cpu capacity is 2
	I0528 21:34:27.842080 1355836 node_conditions.go:105] duration metric: took 3.532974ms to run NodePressure ...
	I0528 21:34:27.842095 1355836 start.go:240] waiting for startup goroutines ...
	I0528 21:34:27.842105 1355836 start.go:245] waiting for cluster config update ...
	I0528 21:34:27.842122 1355836 start.go:254] writing updated cluster config ...
	I0528 21:34:27.842404 1355836 ssh_runner.go:195] Run: rm -f paused
	I0528 21:34:28.112018 1355836 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:34:28.115564 1355836 out.go:177] * Done! kubectl is now configured to use "addons-504712" cluster and "default" namespace by default
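With the cluster reported ready, a quick way to confirm the kubeconfig context the message refers to could be the following hypothetical manual checks; the context name is taken from the line above:

	kubectl config current-context
	kubectl --context addons-504712 get nodes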
	
	
	==> CRI-O <==
	May 28 21:39:31 addons-504712 crio[917]: time="2024-05-28 21:39:31.978822614Z" level=info msg="Removed container 145f770c334880c26e4f2884824b01817c35af19f44646fe59aa13d5f5c40b3e: default/hello-world-app-86c47465fc-bp6tp/hello-world-app" id=e941d236-8c5c-40e0-97e5-2d4a51762926 name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.186413325Z" level=info msg="Stopping container: 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4 (timeout: 30s)" id=6819865b-9c95-4196-8bf4-1bd84e6764d4 name=/runtime.v1.RuntimeService/StopContainer
	May 28 21:39:37 addons-504712 conmon[3613]: conmon 75672157ce1c53825d66 <ninfo>: container 3624 exited with status 2
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.330544174Z" level=info msg="Stopped container 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4: default/cloud-spanner-emulator-6fcd4f6f98-4ttqc/cloud-spanner-emulator" id=6819865b-9c95-4196-8bf4-1bd84e6764d4 name=/runtime.v1.RuntimeService/StopContainer
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.331189967Z" level=info msg="Stopping pod sandbox: a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=72d38030-3838-45e0-bba0-8d0f9f614796 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.331404082Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-6fcd4f6f98-4ttqc Namespace:default ID:a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960 UID:d4e68c8f-c48a-45a1-bc99-c73899fd888f NetNS:/var/run/netns/1b01c44d-96ba-4f0d-9f48-6fda8d5d070c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.331548866Z" level=info msg="Deleting pod default_cloud-spanner-emulator-6fcd4f6f98-4ttqc from CNI network \"kindnet\" (type=ptp)"
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.350640184Z" level=info msg="Stopped pod sandbox: a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=72d38030-3838-45e0-bba0-8d0f9f614796 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.965882292Z" level=info msg="Removing container: 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4" id=53fcc268-55fc-43e4-9cf9-5077056bbd8d name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:39:37 addons-504712 crio[917]: time="2024-05-28 21:39:37.984714786Z" level=info msg="Removed container 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4: default/cloud-spanner-emulator-6fcd4f6f98-4ttqc/cloud-spanner-emulator" id=53fcc268-55fc-43e4-9cf9-5077056bbd8d name=/runtime.v1.RuntimeService/RemoveContainer
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.404031924Z" level=info msg="Stopping pod sandbox: a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=b72358f0-9594-4b4c-b4df-0adf917c2603 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.404079734Z" level=info msg="Stopped pod sandbox (already stopped): a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=b72358f0-9594-4b4c-b4df-0adf917c2603 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.404632164Z" level=info msg="Removing pod sandbox: a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=d8d32505-ce20-4de3-b076-03a90175c497 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.414254764Z" level=info msg="Removed pod sandbox: a4f29b6489013bdf099b82103c75fbe3fe0e3900d1e23f23946a09b7ad002960" id=d8d32505-ce20-4de3-b076-03a90175c497 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.414824834Z" level=info msg="Stopping pod sandbox: 81a90bb31ceeaa585dc96a6f81c75e5d42cc7f4986682d9b6ddecd03317fce55" id=a47a6180-c06d-4bf8-96ea-9054ead12be4 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.414859163Z" level=info msg="Stopped pod sandbox (already stopped): 81a90bb31ceeaa585dc96a6f81c75e5d42cc7f4986682d9b6ddecd03317fce55" id=a47a6180-c06d-4bf8-96ea-9054ead12be4 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.415392032Z" level=info msg="Removing pod sandbox: 81a90bb31ceeaa585dc96a6f81c75e5d42cc7f4986682d9b6ddecd03317fce55" id=1cd4339f-2049-4b0d-bab4-c24e798ae5ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 28 21:39:54 addons-504712 crio[917]: time="2024-05-28 21:39:54.425371653Z" level=info msg="Removed pod sandbox: 81a90bb31ceeaa585dc96a6f81c75e5d42cc7f4986682d9b6ddecd03317fce55" id=1cd4339f-2049-4b0d-bab4-c24e798ae5ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 28 21:40:47 addons-504712 crio[917]: time="2024-05-28 21:40:47.875534481Z" level=info msg="Stopping container: 6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c (timeout: 30s)" id=c5da479e-f232-4a21-b400-1b535816621b name=/runtime.v1.RuntimeService/StopContainer
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.053351129Z" level=info msg="Stopped container 6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c: kube-system/metrics-server-c59844bb4-99j6d/metrics-server" id=c5da479e-f232-4a21-b400-1b535816621b name=/runtime.v1.RuntimeService/StopContainer
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.054476417Z" level=info msg="Stopping pod sandbox: 20dabd9b8a8193e5ebd46231354f2bb7cc4ecd4b0de750c5a65f5c676c64dc21" id=aa4df00b-9165-46bd-8d20-6294979e4b46 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.054697089Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-99j6d Namespace:kube-system ID:20dabd9b8a8193e5ebd46231354f2bb7cc4ecd4b0de750c5a65f5c676c64dc21 UID:6c20ca2e-5167-4501-8529-d317230ce330 NetNS:/var/run/netns/4b6f5b69-afa1-47c1-b925-dd7801831845 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.054838066Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-99j6d from CNI network \"kindnet\" (type=ptp)"
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.093461634Z" level=info msg="Stopped pod sandbox: 20dabd9b8a8193e5ebd46231354f2bb7cc4ecd4b0de750c5a65f5c676c64dc21" id=aa4df00b-9165-46bd-8d20-6294979e4b46 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 28 21:40:49 addons-504712 crio[917]: time="2024-05-28 21:40:49.126187521Z" level=info msg="Removing container: 6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c" id=f24ab7dd-b674-48fb-abc4-66727c1ff811 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8df0aa93e513a       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               About a minute ago   Exited              hello-world-app           4                   a7f164109a5ff       hello-world-app-86c47465fc-bp6tp
	3cc4d4ec5577d       docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa                5 minutes ago        Running             nginx                     0                   611346af517ff       nginx
	cdfcfe22abb7a       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474          6 minutes ago        Running             headlamp                  0                   7cd2a5bfff847       headlamp-68456f997b-48ktv
	09f6b46f6af63       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69   6 minutes ago        Running             gcp-auth                  0                   3c6fbe6471648       gcp-auth-5db96cd9b4-ncljx
	a975aae5c423e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                7 minutes ago        Running             yakd                      0                   dc706073136c7       yakd-dashboard-5ddbf7d777-bjx67
	0419ea7eeb7f5       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                               8 minutes ago        Running             coredns                   0                   112090cbd8803       coredns-7db6d8ff4d-5qs7n
	2c6bd74546fd1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               8 minutes ago        Running             storage-provisioner       0                   06da4576b9735       storage-provisioner
	b2eb2156bfd52       docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be             8 minutes ago        Running             kindnet-cni               0                   a114c0ce60b1b       kindnet-h8d66
	a291f575a32c1       05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee                                               8 minutes ago        Running             kube-proxy                0                   db59d148453f0       kube-proxy-kdmkz
	5563c029288da       163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a                                               9 minutes ago        Running             kube-scheduler            0                   ef1ffb33d4809       kube-scheduler-addons-504712
	e57173f763b7b       988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee                                               9 minutes ago        Running             kube-apiserver            0                   6085e4eb2e9f2       kube-apiserver-addons-504712
	caef4ef03b389       234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4                                               9 minutes ago        Running             kube-controller-manager   0                   2278b3f166154       kube-controller-manager-addons-504712
	3d73169449d92       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                               9 minutes ago        Running             etcd                      0                   762ef4fbfbaf3       etcd-addons-504712
	
	
	==> coredns [0419ea7eeb7f5e9af1318539235cbd1d08460e062a944a05eb09d7ba8956dc15] <==
	[INFO] 10.244.0.19:40913 - 13612 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057976s
	[INFO] 10.244.0.19:40913 - 46902 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060429s
	[INFO] 10.244.0.19:40913 - 23347 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069947s
	[INFO] 10.244.0.19:40913 - 59494 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060199s
	[INFO] 10.244.0.19:40913 - 65236 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001276752s
	[INFO] 10.244.0.19:40913 - 62970 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001035321s
	[INFO] 10.244.0.19:40913 - 6759 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072696s
	[INFO] 10.244.0.19:49010 - 6381 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098147s
	[INFO] 10.244.0.19:49010 - 37352 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065886s
	[INFO] 10.244.0.19:36568 - 8055 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033796s
	[INFO] 10.244.0.19:49010 - 59027 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049492s
	[INFO] 10.244.0.19:36568 - 51570 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000145555s
	[INFO] 10.244.0.19:49010 - 59591 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004205s
	[INFO] 10.244.0.19:49010 - 23738 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084076s
	[INFO] 10.244.0.19:49010 - 23654 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057951s
	[INFO] 10.244.0.19:36568 - 23252 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000227695s
	[INFO] 10.244.0.19:36568 - 6346 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065623s
	[INFO] 10.244.0.19:36568 - 59344 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071423s
	[INFO] 10.244.0.19:36568 - 62212 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064532s
	[INFO] 10.244.0.19:49010 - 42440 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001129276s
	[INFO] 10.244.0.19:36568 - 30844 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001358915s
	[INFO] 10.244.0.19:49010 - 37643 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001042115s
	[INFO] 10.244.0.19:49010 - 48920 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075699s
	[INFO] 10.244.0.19:36568 - 63754 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002150543s
	[INFO] 10.244.0.19:36568 - 16500 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105261s
	
	
	==> describe nodes <==
	Name:               addons-504712
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-504712
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-504712
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_31_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-504712
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:31:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-504712
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:40:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:39:03 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:39:03 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:39:03 +0000   Tue, 28 May 2024 21:31:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:39:03 +0000   Tue, 28 May 2024 21:32:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-504712
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 366a2bf59c69436784fd8c89b1f0bc70
	  System UUID:                4d1c9ec3-31a0-4142-82b5-ef27d22f688d
	  Boot ID:                    2882d43f-5a85-456c-aec3-876199af1cc0
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-bp6tp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  gcp-auth                    gcp-auth-5db96cd9b4-ncljx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  headlamp                    headlamp-68456f997b-48ktv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 coredns-7db6d8ff4d-5qs7n                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m41s
	  kube-system                 etcd-addons-504712                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m55s
	  kube-system                 kindnet-h8d66                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m42s
	  kube-system                 kube-apiserver-addons-504712             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-addons-504712    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-proxy-kdmkz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-scheduler-addons-504712             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-bjx67          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m35s                  kube-proxy       
	  Normal  Starting                 8m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m56s (x2 over 8m56s)  kubelet          Node addons-504712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s (x2 over 8m56s)  kubelet          Node addons-504712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x2 over 8m56s)  kubelet          Node addons-504712 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m42s                  node-controller  Node addons-504712 event: Registered Node addons-504712 in Controller
	  Normal  NodeReady                8m6s                   kubelet          Node addons-504712 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001076] FS-Cache: O-key=[8] '11d6c90000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000559aec3f
	[  +0.001096] FS-Cache: N-key=[8] '11d6c90000000000'
	[  +0.002722] FS-Cache: Duplicate cookie detected
	[  +0.000859] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000e8e33e92
	[  +0.001094] FS-Cache: O-key=[8] '11d6c90000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=000000000318bcd7
	[  +0.001118] FS-Cache: N-key=[8] '11d6c90000000000'
	[  +2.671416] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=000000003d37d697
	[  +0.001081] FS-Cache: O-key=[8] '10d6c90000000000'
	[  +0.000796] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000559aec3f
	[  +0.001092] FS-Cache: N-key=[8] '10d6c90000000000'
	[  +0.273823] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000bc6509f8
	[  +0.001083] FS-Cache: O-key=[8] '16d6c90000000000'
	[  +0.000760] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001001] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=00000000bc4002c6
	[  +0.001077] FS-Cache: N-key=[8] '16d6c90000000000'
	
	
	==> etcd [3d73169449d92fc6056bb23a131151a8449d76244251a2256cbd76b8dbf93ba2] <==
	{"level":"info","ts":"2024-05-28T21:31:47.512143Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:31:47.512218Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:31:47.512376Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:31:47.512424Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:31:48.174077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-28T21:31:48.174289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.174319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.174355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.17439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T21:31:48.178156Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182253Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-504712 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:31:48.18244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:31:48.182536Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182667Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:31:48.182715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:31:48.189933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:31:48.190053Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:31:48.194548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:31:48.221021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-05-28T21:32:08.774499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.075253ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128029487088505457 > lease_revoke:<id:70cc8fc11dcf90f0>","response":"size:29"}
	{"level":"info","ts":"2024-05-28T21:32:13.39659Z","caller":"traceutil/trace.go:171","msg":"trace[1632236740] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"124.438403ms","start":"2024-05-28T21:32:13.272135Z","end":"2024-05-28T21:32:13.396573Z","steps":["trace[1632236740] 'process raft request'  (duration: 99.187766ms)","trace[1632236740] 'compare'  (duration: 24.722913ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:32:13.397Z","caller":"traceutil/trace.go:171","msg":"trace[1809514941] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"103.038279ms","start":"2024-05-28T21:32:13.293949Z","end":"2024-05-28T21:32:13.396987Z","steps":["trace[1809514941] 'process raft request'  (duration: 102.180971ms)"],"step_count":1}
	
	
	==> gcp-auth [09f6b46f6af630a6eb46070640076f1cc2f04ac10bc9e1fab9b522a482ce6d55] <==
	2024/05/28 21:33:56 GCP Auth Webhook started!
	2024/05/28 21:34:28 Ready to marshal response ...
	2024/05/28 21:34:28 Ready to write response ...
	2024/05/28 21:34:29 Ready to marshal response ...
	2024/05/28 21:34:29 Ready to write response ...
	2024/05/28 21:34:29 Ready to marshal response ...
	2024/05/28 21:34:29 Ready to write response ...
	2024/05/28 21:34:39 Ready to marshal response ...
	2024/05/28 21:34:39 Ready to write response ...
	2024/05/28 21:34:43 Ready to marshal response ...
	2024/05/28 21:34:43 Ready to write response ...
	2024/05/28 21:35:11 Ready to marshal response ...
	2024/05/28 21:35:11 Ready to write response ...
	2024/05/28 21:35:39 Ready to marshal response ...
	2024/05/28 21:35:39 Ready to write response ...
	2024/05/28 21:37:57 Ready to marshal response ...
	2024/05/28 21:37:57 Ready to write response ...
	2024/05/28 21:38:37 Ready to marshal response ...
	2024/05/28 21:38:37 Ready to write response ...
	2024/05/28 21:38:37 Ready to marshal response ...
	2024/05/28 21:38:37 Ready to write response ...
	2024/05/28 21:38:47 Ready to marshal response ...
	2024/05/28 21:38:47 Ready to write response ...
	
	
	==> kernel <==
	 21:40:49 up  5:23,  0 users,  load average: 0.10, 0.47, 1.34
	Linux addons-504712 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b2eb2156bfd5205ba85594138a11a170511457e457c0a8abb6430df4a59fd03a] <==
	I0528 21:38:43.858805       1 main.go:227] handling current node
	I0528 21:38:53.870942       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:38:53.870971       1 main.go:227] handling current node
	I0528 21:39:03.881755       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:03.881786       1 main.go:227] handling current node
	I0528 21:39:13.885657       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:13.885688       1 main.go:227] handling current node
	I0528 21:39:23.889919       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:23.889947       1 main.go:227] handling current node
	I0528 21:39:33.902715       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:33.902743       1 main.go:227] handling current node
	I0528 21:39:43.915253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:43.915278       1 main.go:227] handling current node
	I0528 21:39:53.919634       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:39:53.919665       1 main.go:227] handling current node
	I0528 21:40:03.930694       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:40:03.930726       1 main.go:227] handling current node
	I0528 21:40:13.934658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:40:13.934689       1 main.go:227] handling current node
	I0528 21:40:23.947302       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:40:23.947343       1 main.go:227] handling current node
	I0528 21:40:33.951532       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:40:33.951668       1 main.go:227] handling current node
	I0528 21:40:43.958767       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0528 21:40:43.958794       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e57173f763b7b3678937a92f333d2c7050ac69158d4cffb2c45e5e1646879d81] <==
	E0528 21:33:54.158694       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.253.229:443: connect: connection refused
	E0528 21:33:54.179599       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.253.229:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.253.229:443: connect: connection refused
	I0528 21:33:54.406706       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0528 21:34:29.010752       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.170.46"}
	I0528 21:34:55.673159       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0528 21:35:27.108335       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.108424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.148797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.148841       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.157958       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.158048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 21:35:27.204369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 21:35:27.204458       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0528 21:35:28.149175       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0528 21:35:28.205215       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0528 21:35:28.211369       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0528 21:35:33.877725       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0528 21:35:34.907804       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0528 21:35:39.434992       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0528 21:35:39.724732       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.220.114"}
	I0528 21:37:58.091722       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.73.5"}
	E0528 21:38:48.469724       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0528 21:38:48.480707       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0528 21:38:48.491803       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0528 21:39:03.491425       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [caef4ef03b3899c5e1426eca9d4413298905eff24c5a4c6570a665684faa96e8] <==
	W0528 21:39:06.750840       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:39:06.750877       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:39:22.484771       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:39:22.484898       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:39:31.967028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="72.999µs"
	W0528 21:39:33.899192       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:39:33.899311       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:39:35.644204       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0528 21:39:37.172991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6fcd4f6f98" duration="11.93µs"
	W0528 21:39:39.135835       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:39:39.135873       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:39:39.493903       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:39:39.493941       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:39:46.938611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="47.228µs"
	W0528 21:40:00.743242       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:40:00.743367       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:40:18.666421       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:40:18.666540       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:40:22.699977       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:40:22.700015       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:40:24.660876       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:40:24.660913       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:40:36.048318       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:40:36.048356       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:40:47.845497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="5.358µs"
	
	
	==> kube-proxy [a291f575a32c1021560227dbbefd0ec1674ad9cf85886fca164c125117fc93fb] <==
	I0528 21:32:12.913644       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:32:13.330840       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0528 21:32:13.787059       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0528 21:32:13.787201       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:32:13.797610       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0528 21:32:13.797715       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0528 21:32:13.797765       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:32:13.798374       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:32:13.798442       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:13.799437       1 config.go:192] "Starting service config controller"
	I0528 21:32:13.802375       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:32:13.802484       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:32:13.802862       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:32:13.803446       1 config.go:319] "Starting node config controller"
	I0528 21:32:13.803500       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:32:13.904077       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:32:13.927962       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:32:13.946162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5563c029288daed82eae34d9a1fbb982331e9bef8fc983be971a2c5455fe538c] <==
	W0528 21:31:51.843315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:31:51.847478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 21:31:51.843350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:31:51.847516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 21:31:51.843382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 21:31:51.847537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 21:31:51.844048       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:31:51.847562       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:31:51.847010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:31:51.847583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:31:51.847078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 21:31:51.847597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 21:31:51.847135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 21:31:51.847611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 21:31:51.847187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:31:51.847635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:31:51.847247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 21:31:51.847658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 21:31:51.847297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:31:51.847671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 21:31:52.692492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 21:31:52.692532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 21:31:52.736567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:31:52.736677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0528 21:31:53.428967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:39:37 addons-504712 kubelet[1526]: I0528 21:39:37.964905    1526 scope.go:117] "RemoveContainer" containerID="75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4"
	May 28 21:39:37 addons-504712 kubelet[1526]: I0528 21:39:37.984958    1526 scope.go:117] "RemoveContainer" containerID="75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4"
	May 28 21:39:37 addons-504712 kubelet[1526]: E0528 21:39:37.985425    1526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4\": container with ID starting with 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4 not found: ID does not exist" containerID="75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4"
	May 28 21:39:37 addons-504712 kubelet[1526]: I0528 21:39:37.985554    1526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4"} err="failed to get container status \"75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4\": rpc error: code = NotFound desc = could not find container \"75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4\": container with ID starting with 75672157ce1c53825d669692aaed0bfc277460c6786307880af392b9618d62c4 not found: ID does not exist"
	May 28 21:39:39 addons-504712 kubelet[1526]: I0528 21:39:39.928864    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e68c8f-c48a-45a1-bc99-c73899fd888f" path="/var/lib/kubelet/pods/d4e68c8f-c48a-45a1-bc99-c73899fd888f/volumes"
	May 28 21:39:46 addons-504712 kubelet[1526]: I0528 21:39:46.927406    1526 scope.go:117] "RemoveContainer" containerID="8df0aa93e513af9c0bcf3406c35e08397ac112010dde58eb7607313986c8671d"
	May 28 21:39:46 addons-504712 kubelet[1526]: E0528 21:39:46.927735    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:39:59 addons-504712 kubelet[1526]: I0528 21:39:59.927705    1526 scope.go:117] "RemoveContainer" containerID="8df0aa93e513af9c0bcf3406c35e08397ac112010dde58eb7607313986c8671d"
	May 28 21:39:59 addons-504712 kubelet[1526]: E0528 21:39:59.927982    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:40:12 addons-504712 kubelet[1526]: I0528 21:40:12.927782    1526 scope.go:117] "RemoveContainer" containerID="8df0aa93e513af9c0bcf3406c35e08397ac112010dde58eb7607313986c8671d"
	May 28 21:40:12 addons-504712 kubelet[1526]: E0528 21:40:12.928093    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:40:26 addons-504712 kubelet[1526]: I0528 21:40:26.928179    1526 scope.go:117] "RemoveContainer" containerID="8df0aa93e513af9c0bcf3406c35e08397ac112010dde58eb7607313986c8671d"
	May 28 21:40:26 addons-504712 kubelet[1526]: E0528 21:40:26.928459    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:40:40 addons-504712 kubelet[1526]: I0528 21:40:40.927520    1526 scope.go:117] "RemoveContainer" containerID="8df0aa93e513af9c0bcf3406c35e08397ac112010dde58eb7607313986c8671d"
	May 28 21:40:40 addons-504712 kubelet[1526]: E0528 21:40:40.927813    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-bp6tp_default(a28bb037-057e-4eeb-9aa1-aea54533e1ee)\"" pod="default/hello-world-app-86c47465fc-bp6tp" podUID="a28bb037-057e-4eeb-9aa1-aea54533e1ee"
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.120367    1526 scope.go:117] "RemoveContainer" containerID="6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c"
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.171057    1526 scope.go:117] "RemoveContainer" containerID="6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c"
	May 28 21:40:49 addons-504712 kubelet[1526]: E0528 21:40:49.171749    1526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c\": container with ID starting with 6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c not found: ID does not exist" containerID="6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c"
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.171795    1526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c"} err="failed to get container status \"6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c\": rpc error: code = NotFound desc = could not find container \"6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c\": container with ID starting with 6b48727f5c2722b0ccf2577211288a29d980321a1d3c1ca3c85be4884d0ad38c not found: ID does not exist"
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.206497    1526 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cppq\" (UniqueName: \"kubernetes.io/projected/6c20ca2e-5167-4501-8529-d317230ce330-kube-api-access-7cppq\") pod \"6c20ca2e-5167-4501-8529-d317230ce330\" (UID: \"6c20ca2e-5167-4501-8529-d317230ce330\") "
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.206554    1526 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6c20ca2e-5167-4501-8529-d317230ce330-tmp-dir\") pod \"6c20ca2e-5167-4501-8529-d317230ce330\" (UID: \"6c20ca2e-5167-4501-8529-d317230ce330\") "
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.206901    1526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c20ca2e-5167-4501-8529-d317230ce330-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6c20ca2e-5167-4501-8529-d317230ce330" (UID: "6c20ca2e-5167-4501-8529-d317230ce330"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.209202    1526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c20ca2e-5167-4501-8529-d317230ce330-kube-api-access-7cppq" (OuterVolumeSpecName: "kube-api-access-7cppq") pod "6c20ca2e-5167-4501-8529-d317230ce330" (UID: "6c20ca2e-5167-4501-8529-d317230ce330"). InnerVolumeSpecName "kube-api-access-7cppq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.307488    1526 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7cppq\" (UniqueName: \"kubernetes.io/projected/6c20ca2e-5167-4501-8529-d317230ce330-kube-api-access-7cppq\") on node \"addons-504712\" DevicePath \"\""
	May 28 21:40:49 addons-504712 kubelet[1526]: I0528 21:40:49.307535    1526 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6c20ca2e-5167-4501-8529-d317230ce330-tmp-dir\") on node \"addons-504712\" DevicePath \"\""
	
	
	==> storage-provisioner [2c6bd74546fd14a99d77c44d780b2f3861328b347ffad4348ed7d992edbe4b84] <==
	I0528 21:32:44.664134       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:32:44.680011       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:32:44.680055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:32:44.690343       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:32:44.691008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b!
	I0528 21:32:44.691778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1bb1130-8ffe-43f7-a0d5-c9295411015b", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b became leader
	I0528 21:32:44.791167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-504712_146d4151-1cbb-4b0d-a24c-cd489a309f6b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-504712 -n addons-504712
helpers_test.go:261: (dbg) Run:  kubectl --context addons-504712 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (366.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (372.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-137556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0528 22:24:22.540685 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:24:28.138509 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-137556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m9.166653624s)

                                                
                                                
-- stdout --
	* [old-k8s-version-137556] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-137556" primary control-plane node in "old-k8s-version-137556" cluster
	* Pulling base image v0.0.44-1716228441-18934 ...
	* Restarting existing docker container for "old-k8s-version-137556" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-137556 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 22:23:34.595516 1540905 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:23:34.595712 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:23:34.595739 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:23:34.595758 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:23:34.596008 1540905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 22:23:34.596382 1540905 out.go:298] Setting JSON to false
	I0528 22:23:34.597304 1540905 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21963,"bootTime":1716913052,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 22:23:34.597393 1540905 start.go:139] virtualization:  
	I0528 22:23:34.599792 1540905 out.go:177] * [old-k8s-version-137556] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 22:23:34.602075 1540905 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:23:34.603686 1540905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:23:34.602152 1540905 notify.go:220] Checking for updates...
	I0528 22:23:34.607038 1540905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:23:34.609134 1540905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 22:23:34.610645 1540905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 22:23:34.612326 1540905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:23:34.614444 1540905 config.go:182] Loaded profile config "old-k8s-version-137556": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 22:23:34.616704 1540905 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0528 22:23:34.618596 1540905 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:23:34.640092 1540905 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 22:23:34.640208 1540905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:23:34.728024 1540905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-05-28 22:23:34.718978421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:23:34.728140 1540905 docker.go:295] overlay module found
	I0528 22:23:34.731109 1540905 out.go:177] * Using the docker driver based on existing profile
	I0528 22:23:34.733406 1540905 start.go:297] selected driver: docker
	I0528 22:23:34.733427 1540905 start.go:901] validating driver "docker" against &{Name:old-k8s-version-137556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-137556 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:23:34.733534 1540905 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:23:34.734221 1540905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:23:34.828328 1540905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-05-28 22:23:34.816395389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:23:34.828784 1540905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:23:34.828818 1540905 cni.go:84] Creating CNI manager for ""
	I0528 22:23:34.828831 1540905 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 22:23:34.828875 1540905 start.go:340] cluster config:
	{Name:old-k8s-version-137556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-137556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:23:34.831613 1540905 out.go:177] * Starting "old-k8s-version-137556" primary control-plane node in "old-k8s-version-137556" cluster
	I0528 22:23:34.833699 1540905 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 22:23:34.836731 1540905 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 22:23:34.838934 1540905 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 22:23:34.838988 1540905 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0528 22:23:34.838996 1540905 cache.go:56] Caching tarball of preloaded images
	I0528 22:23:34.839095 1540905 preload.go:173] Found /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0528 22:23:34.839105 1540905 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 22:23:34.839228 1540905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/config.json ...
	I0528 22:23:34.839463 1540905 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 22:23:34.855004 1540905 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon, skipping pull
	I0528 22:23:34.855026 1540905 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in daemon, skipping load
	I0528 22:23:34.855055 1540905 cache.go:194] Successfully downloaded all kic artifacts
	I0528 22:23:34.855084 1540905 start.go:360] acquireMachinesLock for old-k8s-version-137556: {Name:mk2c00f24a3d04faf9982ed0805071e70a489b60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:23:34.855144 1540905 start.go:364] duration metric: took 39.548µs to acquireMachinesLock for "old-k8s-version-137556"
	I0528 22:23:34.855164 1540905 start.go:96] Skipping create...Using existing machine configuration
	I0528 22:23:34.855177 1540905 fix.go:54] fixHost starting: 
	I0528 22:23:34.855660 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:34.879573 1540905 fix.go:112] recreateIfNeeded on old-k8s-version-137556: state=Stopped err=<nil>
	W0528 22:23:34.879601 1540905 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 22:23:34.882748 1540905 out.go:177] * Restarting existing docker container for "old-k8s-version-137556" ...
	I0528 22:23:34.884948 1540905 cli_runner.go:164] Run: docker start old-k8s-version-137556
	I0528 22:23:35.244793 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:35.261420 1540905 kic.go:430] container "old-k8s-version-137556" state is running.
	I0528 22:23:35.261816 1540905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-137556
	I0528 22:23:35.287358 1540905 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/config.json ...
	I0528 22:23:35.287590 1540905 machine.go:94] provisionDockerMachine start ...
	I0528 22:23:35.287656 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:35.312878 1540905 main.go:141] libmachine: Using SSH client type: native
	I0528 22:23:35.313144 1540905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I0528 22:23:35.313159 1540905 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:23:35.313859 1540905 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0528 22:23:38.470318 1540905 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-137556
	
	I0528 22:23:38.470344 1540905 ubuntu.go:169] provisioning hostname "old-k8s-version-137556"
	I0528 22:23:38.470446 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:38.546503 1540905 main.go:141] libmachine: Using SSH client type: native
	I0528 22:23:38.547469 1540905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I0528 22:23:38.547488 1540905 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-137556 && echo "old-k8s-version-137556" | sudo tee /etc/hostname
	I0528 22:23:38.736393 1540905 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-137556
	
	I0528 22:23:38.736480 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:38.775756 1540905 main.go:141] libmachine: Using SSH client type: native
	I0528 22:23:38.776005 1540905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I0528 22:23:38.776028 1540905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-137556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-137556/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-137556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:23:38.971016 1540905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:23:38.971052 1540905 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1349783/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1349783/.minikube}
	I0528 22:23:38.971085 1540905 ubuntu.go:177] setting up certificates
	I0528 22:23:38.971097 1540905 provision.go:84] configureAuth start
	I0528 22:23:38.971194 1540905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-137556
	I0528 22:23:39.011757 1540905 provision.go:143] copyHostCerts
	I0528 22:23:39.011834 1540905 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem, removing ...
	I0528 22:23:39.011847 1540905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem
	I0528 22:23:39.011938 1540905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem (1082 bytes)
	I0528 22:23:39.012035 1540905 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem, removing ...
	I0528 22:23:39.012047 1540905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem
	I0528 22:23:39.012080 1540905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem (1123 bytes)
	I0528 22:23:39.012186 1540905 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem, removing ...
	I0528 22:23:39.012196 1540905 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem
	I0528 22:23:39.012226 1540905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem (1675 bytes)
	I0528 22:23:39.012277 1540905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-137556 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-137556]
	I0528 22:23:39.351123 1540905 provision.go:177] copyRemoteCerts
	I0528 22:23:39.351200 1540905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:23:39.351251 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:39.372776 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:39.465452 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 22:23:39.492905 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 22:23:39.520784 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 22:23:39.547728 1540905 provision.go:87] duration metric: took 576.616647ms to configureAuth
	I0528 22:23:39.547757 1540905 ubuntu.go:193] setting minikube options for container-runtime
	I0528 22:23:39.547946 1540905 config.go:182] Loaded profile config "old-k8s-version-137556": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 22:23:39.548051 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:39.566340 1540905 main.go:141] libmachine: Using SSH client type: native
	I0528 22:23:39.566597 1540905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I0528 22:23:39.566619 1540905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:23:40.023948 1540905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:23:40.023978 1540905 machine.go:97] duration metric: took 4.736370738s to provisionDockerMachine
	I0528 22:23:40.023990 1540905 start.go:293] postStartSetup for "old-k8s-version-137556" (driver="docker")
	I0528 22:23:40.024002 1540905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:23:40.024069 1540905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:23:40.024115 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:40.061084 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:40.171999 1540905 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:23:40.177519 1540905 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 22:23:40.177558 1540905 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 22:23:40.177570 1540905 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 22:23:40.177577 1540905 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 22:23:40.177590 1540905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/addons for local assets ...
	I0528 22:23:40.177649 1540905 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/files for local assets ...
	I0528 22:23:40.177734 1540905 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem -> 13551972.pem in /etc/ssl/certs
	I0528 22:23:40.177881 1540905 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:23:40.189319 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem --> /etc/ssl/certs/13551972.pem (1708 bytes)
	I0528 22:23:40.227585 1540905 start.go:296] duration metric: took 203.580636ms for postStartSetup
	I0528 22:23:40.227669 1540905 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:23:40.227715 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:40.251134 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:40.339350 1540905 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 22:23:40.343854 1540905 fix.go:56] duration metric: took 5.488676702s for fixHost
	I0528 22:23:40.343882 1540905 start.go:83] releasing machines lock for "old-k8s-version-137556", held for 5.48872704s
	I0528 22:23:40.343949 1540905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-137556
	I0528 22:23:40.366265 1540905 ssh_runner.go:195] Run: cat /version.json
	I0528 22:23:40.366324 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:40.366568 1540905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:23:40.366631 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:40.400045 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:40.400647 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:40.621301 1540905 ssh_runner.go:195] Run: systemctl --version
	I0528 22:23:40.626636 1540905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:23:40.781367 1540905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 22:23:40.785926 1540905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:23:40.795264 1540905 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0528 22:23:40.795341 1540905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:23:40.805010 1540905 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 22:23:40.805038 1540905 start.go:494] detecting cgroup driver to use...
	I0528 22:23:40.805070 1540905 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:23:40.805118 1540905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:23:40.817168 1540905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:23:40.829811 1540905 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:23:40.829894 1540905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:23:40.844809 1540905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:23:40.856838 1540905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:23:40.970031 1540905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:23:41.075165 1540905 docker.go:233] disabling docker service ...
	I0528 22:23:41.075243 1540905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:23:41.089723 1540905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:23:41.102619 1540905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:23:41.216889 1540905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:23:41.340423 1540905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:23:41.353416 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:23:41.370946 1540905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0528 22:23:41.371068 1540905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:23:41.381589 1540905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:23:41.381725 1540905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:23:41.393372 1540905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:23:41.404550 1540905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:23:41.415180 1540905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:23:41.424850 1540905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:23:41.434545 1540905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:23:41.443966 1540905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:23:41.553700 1540905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 22:23:42.248763 1540905 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:23:42.248944 1540905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:23:42.253122 1540905 start.go:562] Will wait 60s for crictl version
	I0528 22:23:42.253234 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:23:42.256948 1540905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:23:42.309543 1540905 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0528 22:23:42.309661 1540905 ssh_runner.go:195] Run: crio --version
	I0528 22:23:42.359879 1540905 ssh_runner.go:195] Run: crio --version
	I0528 22:23:42.419569 1540905 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0528 22:23:42.421573 1540905 cli_runner.go:164] Run: docker network inspect old-k8s-version-137556 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 22:23:42.442451 1540905 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0528 22:23:42.447462 1540905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:23:42.458273 1540905 kubeadm.go:877] updating cluster {Name:old-k8s-version-137556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-137556 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins
:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:23:42.458436 1540905 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 22:23:42.458518 1540905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:23:42.513391 1540905 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:23:42.513415 1540905 crio.go:433] Images already preloaded, skipping extraction
	I0528 22:23:42.513470 1540905 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:23:42.554094 1540905 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:23:42.554118 1540905 cache_images.go:84] Images are preloaded, skipping loading
	I0528 22:23:42.554126 1540905 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0528 22:23:42.554240 1540905 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-137556 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-137556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:23:42.554324 1540905 ssh_runner.go:195] Run: crio config
	I0528 22:23:42.620481 1540905 cni.go:84] Creating CNI manager for ""
	I0528 22:23:42.620502 1540905 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 22:23:42.620518 1540905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 22:23:42.620545 1540905 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-137556 NodeName:old-k8s-version-137556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0528 22:23:42.620713 1540905 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-137556"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 22:23:42.620816 1540905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0528 22:23:42.629939 1540905 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:23:42.630033 1540905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:23:42.639522 1540905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0528 22:23:42.658193 1540905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:23:42.676618 1540905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0528 22:23:42.694571 1540905 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0528 22:23:42.698350 1540905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:23:42.708655 1540905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:23:42.803236 1540905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:23:42.817937 1540905 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556 for IP: 192.168.85.2
	I0528 22:23:42.818007 1540905 certs.go:194] generating shared ca certs ...
	I0528 22:23:42.818059 1540905 certs.go:226] acquiring lock for ca certs: {Name:mk3b01431a293453662fa80a6161920f23c6c736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:23:42.818252 1540905 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key
	I0528 22:23:42.818335 1540905 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key
	I0528 22:23:42.818372 1540905 certs.go:256] generating profile certs ...
	I0528 22:23:42.818500 1540905 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.key
	I0528 22:23:42.818623 1540905 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/apiserver.key.984daade
	I0528 22:23:42.818709 1540905 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/proxy-client.key
	I0528 22:23:42.818882 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197.pem (1338 bytes)
	W0528 22:23:42.818957 1540905 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197_empty.pem, impossibly tiny 0 bytes
	I0528 22:23:42.818995 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem (1679 bytes)
	I0528 22:23:42.819049 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem (1082 bytes)
	I0528 22:23:42.819106 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:23:42.819165 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem (1675 bytes)
	I0528 22:23:42.819257 1540905 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem (1708 bytes)
	I0528 22:23:42.820102 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:23:42.844947 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:23:42.870136 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:23:42.916516 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:23:42.958812 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0528 22:23:42.982057 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 22:23:43.007463 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:23:43.044148 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 22:23:43.089564 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197.pem --> /usr/share/ca-certificates/1355197.pem (1338 bytes)
	I0528 22:23:43.117543 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem --> /usr/share/ca-certificates/13551972.pem (1708 bytes)
	I0528 22:23:43.145287 1540905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:23:43.172661 1540905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:23:43.193499 1540905 ssh_runner.go:195] Run: openssl version
	I0528 22:23:43.199335 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13551972.pem && ln -fs /usr/share/ca-certificates/13551972.pem /etc/ssl/certs/13551972.pem"
	I0528 22:23:43.209421 1540905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13551972.pem
	I0528 22:23:43.212925 1540905 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 21:42 /usr/share/ca-certificates/13551972.pem
	I0528 22:23:43.213038 1540905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13551972.pem
	I0528 22:23:43.220098 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13551972.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 22:23:43.230471 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:23:43.241087 1540905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:23:43.245194 1540905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:23:43.245254 1540905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:23:43.253058 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 22:23:43.263559 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355197.pem && ln -fs /usr/share/ca-certificates/1355197.pem /etc/ssl/certs/1355197.pem"
	I0528 22:23:43.274177 1540905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355197.pem
	I0528 22:23:43.278304 1540905 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 21:42 /usr/share/ca-certificates/1355197.pem
	I0528 22:23:43.278363 1540905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355197.pem
	I0528 22:23:43.285731 1540905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1355197.pem /etc/ssl/certs/51391683.0"
	I0528 22:23:43.295743 1540905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:23:43.299816 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 22:23:43.307205 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 22:23:43.314632 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 22:23:43.321940 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 22:23:43.329236 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 22:23:43.336412 1540905 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 22:23:43.343612 1540905 kubeadm.go:391] StartCluster: {Name:old-k8s-version-137556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-137556 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:23:43.343709 1540905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 22:23:43.343765 1540905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:23:43.401070 1540905 cri.go:89] found id: ""
	I0528 22:23:43.401166 1540905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 22:23:43.412801 1540905 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 22:23:43.412823 1540905 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 22:23:43.412832 1540905 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 22:23:43.412889 1540905 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 22:23:43.423703 1540905 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 22:23:43.424164 1540905 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-137556" does not appear in /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:23:43.424293 1540905 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-1349783/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-137556" cluster setting kubeconfig missing "old-k8s-version-137556" context setting]
	I0528 22:23:43.424626 1540905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:23:43.426437 1540905 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 22:23:43.437846 1540905 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0528 22:23:43.437880 1540905 kubeadm.go:591] duration metric: took 25.041806ms to restartPrimaryControlPlane
	I0528 22:23:43.437894 1540905 kubeadm.go:393] duration metric: took 94.292091ms to StartCluster
	I0528 22:23:43.437909 1540905 settings.go:142] acquiring lock: {Name:mk3ead4661b05edfaa64061283a93c6a76969cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:23:43.437977 1540905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:23:43.438650 1540905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:23:43.438874 1540905 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:23:43.442369 1540905 out.go:177] * Verifying Kubernetes components...
	I0528 22:23:43.439292 1540905 config.go:182] Loaded profile config "old-k8s-version-137556": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 22:23:43.439251 1540905 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:23:43.443890 1540905 addons.go:69] Setting dashboard=true in profile "old-k8s-version-137556"
	I0528 22:23:43.443918 1540905 addons.go:234] Setting addon dashboard=true in "old-k8s-version-137556"
	W0528 22:23:43.443925 1540905 addons.go:243] addon dashboard should already be in state true
	I0528 22:23:43.443954 1540905 host.go:66] Checking if "old-k8s-version-137556" exists ...
	I0528 22:23:43.444398 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:43.444559 1540905 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-137556"
	I0528 22:23:43.444599 1540905 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-137556"
	W0528 22:23:43.444606 1540905 addons.go:243] addon storage-provisioner should already be in state true
	I0528 22:23:43.444627 1540905 host.go:66] Checking if "old-k8s-version-137556" exists ...
	I0528 22:23:43.445001 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:43.449278 1540905 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-137556"
	I0528 22:23:43.449314 1540905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-137556"
	I0528 22:23:43.449369 1540905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:23:43.449629 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:43.450153 1540905 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-137556"
	I0528 22:23:43.450179 1540905 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-137556"
	W0528 22:23:43.450186 1540905 addons.go:243] addon metrics-server should already be in state true
	I0528 22:23:43.450212 1540905 host.go:66] Checking if "old-k8s-version-137556" exists ...
	I0528 22:23:43.450707 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:43.496250 1540905 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:23:43.503523 1540905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:23:43.503554 1540905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:23:43.505410 1540905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:23:43.503820 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:43.518087 1540905 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0528 22:23:43.510279 1540905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:23:43.521838 1540905 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0528 22:23:43.519921 1540905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:23:43.521048 1540905 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-137556"
	W0528 22:23:43.523459 1540905 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:23:43.523463 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0528 22:23:43.523488 1540905 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0528 22:23:43.523492 1540905 host.go:66] Checking if "old-k8s-version-137556" exists ...
	I0528 22:23:43.523558 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:43.523922 1540905 cli_runner.go:164] Run: docker container inspect old-k8s-version-137556 --format={{.State.Status}}
	I0528 22:23:43.524175 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:43.547202 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:43.579720 1540905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:23:43.579740 1540905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:23:43.579801 1540905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-137556
	I0528 22:23:43.622180 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:43.622208 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:43.631899 1540905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/old-k8s-version-137556/id_rsa Username:docker}
	I0528 22:23:43.751825 1540905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:23:43.805452 1540905 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-137556" to be "Ready" ...
	I0528 22:23:43.814743 1540905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:23:43.814763 1540905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:23:43.822565 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0528 22:23:43.822589 1540905 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0528 22:23:43.868748 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:23:43.883690 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:23:43.912264 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0528 22:23:43.912338 1540905 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0528 22:23:43.913200 1540905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:23:43.913239 1540905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:23:44.040611 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0528 22:23:44.040687 1540905 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0528 22:23:44.046482 1540905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:44.046561 1540905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:23:44.141465 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:44.195449 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0528 22:23:44.195518 1540905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0528 22:23:44.210842 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.210958 1540905 retry.go:31] will retry after 180.35272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:44.272549 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.272629 1540905 retry.go:31] will retry after 288.281649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.284484 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0528 22:23:44.284557 1540905 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0528 22:23:44.313612 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0528 22:23:44.313640 1540905 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0528 22:23:44.370304 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.370339 1540905 retry.go:31] will retry after 129.224302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.371251 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0528 22:23:44.371273 1540905 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0528 22:23:44.391442 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:23:44.404483 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0528 22:23:44.404507 1540905 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0528 22:23:44.472373 1540905 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:23:44.472422 1540905 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0528 22:23:44.500343 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:44.553025 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:23:44.561699 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:44.609423 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.609458 1540905 retry.go:31] will retry after 261.069566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:44.824797 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:44.824917 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.824947 1540905 retry.go:31] will retry after 310.857777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.824832 1540905 retry.go:31] will retry after 468.850958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:44.825259 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.825278 1540905 retry.go:31] will retry after 503.54013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.871599 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:44.963356 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:44.963387 1540905 retry.go:31] will retry after 547.819511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.136865 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:23:45.244461 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.244547 1540905 retry.go:31] will retry after 562.457992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.294859 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:45.329725 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:45.428528 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.428562 1540905 retry.go:31] will retry after 516.034734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:45.491399 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.491433 1540905 retry.go:31] will retry after 752.230555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.511681 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:45.617335 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.617363 1540905 retry.go:31] will retry after 1.038992506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.807043 1540905 node_ready.go:53] error getting node "old-k8s-version-137556": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-137556": dial tcp 192.168.85.2:8443: connect: connection refused
	I0528 22:23:45.807139 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:23:45.891076 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.891110 1540905 retry.go:31] will retry after 576.186797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:45.945282 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:23:46.041053 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.041100 1540905 retry.go:31] will retry after 837.107834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.244576 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:46.348847 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.348876 1540905 retry.go:31] will retry after 953.190511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.468136 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:23:46.572021 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.572153 1540905 retry.go:31] will retry after 479.607386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.657216 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:46.770835 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.770967 1540905 retry.go:31] will retry after 880.017504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:46.878744 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:23:47.005127 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.005209 1540905 retry.go:31] will retry after 1.831529389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.052400 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:23:47.149143 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.149253 1540905 retry.go:31] will retry after 1.449851435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.302538 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:47.399664 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.399740 1540905 retry.go:31] will retry after 1.521086778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.652219 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:47.738570 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:47.738605 1540905 retry.go:31] will retry after 1.932102836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:48.306097 1540905 node_ready.go:53] error getting node "old-k8s-version-137556": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-137556": dial tcp 192.168.85.2:8443: connect: connection refused
	I0528 22:23:48.599722 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:23:48.735285 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:48.735323 1540905 retry.go:31] will retry after 2.60272942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:48.837257 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:48.921649 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:48.945551 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:48.945585 1540905 retry.go:31] will retry after 2.839374882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:49.025761 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:49.025796 1540905 retry.go:31] will retry after 1.914360905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:49.671259 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:49.769523 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:49.769551 1540905 retry.go:31] will retry after 1.689950637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:50.806101 1540905 node_ready.go:53] error getting node "old-k8s-version-137556": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-137556": dial tcp 192.168.85.2:8443: connect: connection refused
	I0528 22:23:50.940369 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:23:51.054899 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.054956 1540905 retry.go:31] will retry after 3.003281468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.338895 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:23:51.460517 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 22:23:51.641696 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.641738 1540905 retry.go:31] will retry after 3.176506053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:23:51.714974 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.715011 1540905 retry.go:31] will retry after 6.391394852s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.785328 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:23:51.913885 1540905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:51.913994 1540905 retry.go:31] will retry after 1.481274555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:23:53.396468 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:23:54.059041 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:23:54.818820 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:23:58.106606 1540905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:24:03.189264 1540905 node_ready.go:49] node "old-k8s-version-137556" has status "Ready":"True"
	I0528 22:24:03.189288 1540905 node_ready.go:38] duration metric: took 19.383760468s for node "old-k8s-version-137556" to be "Ready" ...
	I0528 22:24:03.189301 1540905 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:24:03.568080 1540905 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-rkctv" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.156013 1540905 pod_ready.go:92] pod "coredns-74ff55c5b-rkctv" in "kube-system" namespace has status "Ready":"True"
	I0528 22:24:04.156036 1540905 pod_ready.go:81] duration metric: took 587.87846ms for pod "coredns-74ff55c5b-rkctv" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.156048 1540905 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.369111 1540905 pod_ready.go:92] pod "etcd-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:24:04.369191 1540905 pod_ready.go:81] duration metric: took 213.133487ms for pod "etcd-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.369219 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.439403 1540905 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:24:04.439473 1540905 pod_ready.go:81] duration metric: took 70.233627ms for pod "kube-apiserver-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.439498 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:24:04.763459 1540905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.704382353s)
	I0528 22:24:04.763596 1540905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.367094061s)
	I0528 22:24:04.763646 1540905 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-137556"
	I0528 22:24:05.381839 1540905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.562975904s)
	I0528 22:24:05.383931 1540905 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-137556 addons enable metrics-server
	
	I0528 22:24:05.381996 1540905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.275365367s)
	I0528 22:24:05.409528 1540905 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0528 22:24:05.411495 1540905 addons.go:510] duration metric: took 21.972239619s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0528 22:24:06.451849 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:08.945929 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:10.949092 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:13.446094 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:15.946734 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:18.445761 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:20.446869 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:22.952513 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:25.447217 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:27.476754 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:29.955367 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:32.495602 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:34.949753 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:36.953814 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:39.445565 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:41.446225 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:43.945951 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:45.946277 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:47.947364 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:49.950331 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:52.445956 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:54.945052 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:56.946014 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:24:58.946614 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:01.446499 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:03.946841 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:05.947874 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:08.447084 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:10.946031 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:11.951732 1540905 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:11.951759 1540905 pod_ready.go:81] duration metric: took 1m7.512242141s for pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.951771 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jz6w" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.974855 1540905 pod_ready.go:92] pod "kube-proxy-8jz6w" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:11.974883 1540905 pod_ready.go:81] duration metric: took 23.104329ms for pod "kube-proxy-8jz6w" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.974920 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:13.982194 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:16.480792 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:18.482060 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:20.482153 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:20.982171 1540905 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:20.982200 1540905 pod_ready.go:81] duration metric: took 9.007259537s for pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:20.982213 1540905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:22.989102 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:24.989608 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:27.489518 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:29.988533 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:31.991247 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:34.488194 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:36.495271 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:38.990348 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:41.495366 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:43.521873 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:45.988876 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:47.989265 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:50.488844 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:52.488950 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:54.492792 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:56.989265 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:59.488620 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:01.488762 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:03.988913 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:06.488936 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:08.988177 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:10.988907 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:12.989263 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:15.489223 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:17.988232 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:19.988565 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:22.488224 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:24.489557 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:26.988463 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:28.990525 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:31.487940 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:33.488677 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:35.988481 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:37.988691 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:39.989047 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:42.497971 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:44.988931 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:46.988984 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:49.488471 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:51.988480 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:54.487836 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:56.488805 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:58.988626 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:00.988931 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:02.989274 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:05.487582 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:07.487769 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:09.488409 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:11.987932 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:13.993302 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:16.488026 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:18.988722 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:20.988794 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:23.488388 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:25.489230 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:27.988524 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:29.989139 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:31.989250 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:34.488910 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:36.988638 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:39.488046 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:41.488536 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:43.488804 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:45.988591 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:47.989534 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:50.488161 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:52.989006 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:55.015860 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:57.488310 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:59.488865 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:01.493382 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:03.988248 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:05.989110 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:08.492906 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:10.988521 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:12.989371 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:15.488132 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:17.988756 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:20.488836 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:22.988169 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:25.492744 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:27.987967 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:29.988864 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:31.989186 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:34.488439 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:36.488805 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:38.988909 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:41.488703 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:43.988369 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:45.988453 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:47.989135 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:49.989359 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:52.487694 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:54.488326 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:56.489000 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:58.989339 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:01.488277 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:03.988703 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:05.989324 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:07.990979 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:10.489138 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:12.989242 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:15.488807 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:17.989551 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:20.489092 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:20.988019 1540905 pod_ready.go:81] duration metric: took 4m0.005793638s for pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace to be "Ready" ...
	E0528 22:29:20.988044 1540905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 22:29:20.988093 1540905 pod_ready.go:38] duration metric: took 5m17.798780687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
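	The wait above polls pod readiness until the 4m0s budget for metrics-server-9975d5f86-h5vcp expires with the pod still NotReady. Assuming the same kubeconfig and kubectl binary the test itself uses (both taken from this log), the failing condition can be inspected directly on the node:

		# hypothetical inspection; the pod name is copied from the log above
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.20.0/kubectl -n kube-system \
		  describe pod metrics-server-9975d5f86-h5vcp
		# the Events section would surface the ErrImagePull / ImagePullBackOff
		# entries that the kubelet problem scan reports later in this log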
	I0528 22:29:20.988114 1540905 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:29:20.988153 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:20.988223 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:21.032451 1540905 cri.go:89] found id: "9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:21.032474 1540905 cri.go:89] found id: ""
	I0528 22:29:21.032489 1540905 logs.go:276] 1 containers: [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3]
	I0528 22:29:21.032561 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.036290 1540905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:21.036360 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:21.078257 1540905 cri.go:89] found id: "c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:21.078279 1540905 cri.go:89] found id: ""
	I0528 22:29:21.078288 1540905 logs.go:276] 1 containers: [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8]
	I0528 22:29:21.078346 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.081979 1540905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:21.082082 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:21.128720 1540905 cri.go:89] found id: "cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:21.128744 1540905 cri.go:89] found id: ""
	I0528 22:29:21.128752 1540905 logs.go:276] 1 containers: [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a]
	I0528 22:29:21.128828 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.132418 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:21.132487 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:21.182402 1540905 cri.go:89] found id: "b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:21.182464 1540905 cri.go:89] found id: ""
	I0528 22:29:21.182485 1540905 logs.go:276] 1 containers: [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105]
	I0528 22:29:21.182568 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.186351 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:21.186428 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:21.236128 1540905 cri.go:89] found id: "7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:21.236152 1540905 cri.go:89] found id: ""
	I0528 22:29:21.236160 1540905 logs.go:276] 1 containers: [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b]
	I0528 22:29:21.236250 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.239897 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:21.239973 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:21.276341 1540905 cri.go:89] found id: "41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:21.276373 1540905 cri.go:89] found id: ""
	I0528 22:29:21.276382 1540905 logs.go:276] 1 containers: [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29]
	I0528 22:29:21.276440 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.280074 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:21.280157 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:21.329000 1540905 cri.go:89] found id: "dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:21.329022 1540905 cri.go:89] found id: ""
	I0528 22:29:21.329030 1540905 logs.go:276] 1 containers: [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5]
	I0528 22:29:21.329099 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.333459 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:21.333533 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:21.376968 1540905 cri.go:89] found id: "bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:21.377039 1540905 cri.go:89] found id: ""
	I0528 22:29:21.377060 1540905 logs.go:276] 1 containers: [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527]
	I0528 22:29:21.377147 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.381185 1540905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:21.381306 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:21.421871 1540905 cri.go:89] found id: "4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:21.421943 1540905 cri.go:89] found id: "05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:21.421962 1540905 cri.go:89] found id: ""
	I0528 22:29:21.421984 1540905 logs.go:276] 2 containers: [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998]
	I0528 22:29:21.422120 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.425791 1540905 ssh_runner.go:195] Run: which crictl
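	The loop above resolves one container ID per component by filtering crictl output by container name, then reuses crictl to pull that container's recent logs. A condensed sketch of the same pattern (the component name is a placeholder):

		# sketch of the discovery + log-collection pattern shown above
		NAME=kube-apiserver                              # placeholder component name
		ID=$(sudo crictl ps -a --quiet --name="$NAME")   # resolve the container ID
		sudo /usr/bin/crictl logs --tail 400 "$ID"       # dump its most recent log lines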
	I0528 22:29:21.429266 1540905 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:21.429294 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:21.449313 1540905 logs.go:123] Gathering logs for kube-apiserver [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3] ...
	I0528 22:29:21.449351 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:21.519486 1540905 logs.go:123] Gathering logs for etcd [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8] ...
	I0528 22:29:21.519521 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:21.569520 1540905 logs.go:123] Gathering logs for coredns [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a] ...
	I0528 22:29:21.569549 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:21.613509 1540905 logs.go:123] Gathering logs for kube-controller-manager [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29] ...
	I0528 22:29:21.613536 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:21.686097 1540905 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:21.686137 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:21.831954 1540905 logs.go:123] Gathering logs for storage-provisioner [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33] ...
	I0528 22:29:21.831985 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:21.881237 1540905 logs.go:123] Gathering logs for kube-scheduler [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105] ...
	I0528 22:29:21.881266 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:21.927283 1540905 logs.go:123] Gathering logs for kindnet [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5] ...
	I0528 22:29:21.927310 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:21.967506 1540905 logs.go:123] Gathering logs for storage-provisioner [05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998] ...
	I0528 22:29:21.967537 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:22.013255 1540905 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:22.013284 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:22.069448 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.186601     738 reflector.go:138] object-"kube-system"/"kindnet-token-ztnvd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ztnvd" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.069707 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187153     738 reflector.go:138] object-"kube-system"/"kube-proxy-token-cs2pc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-cs2pc" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.069920 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187714     738 reflector.go:138] object-"default"/"default-token-k8l4j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-k8l4j" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070157 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.194216     738 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cswbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cswbw" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070359 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199146     738 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070571 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199216     738 reflector.go:138] object-"kube-system"/"coredns-token-q7qdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q7qdg" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070775 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199262     738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.075454 1540905 logs.go:138] Found kubelet problem: May 28 22:24:04 old-k8s-version-137556 kubelet[738]: E0528 22:24:04.243616     738 reflector.go:138] object-"kube-system"/"metrics-server-token-42d6m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-42d6m" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.079777 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.119557     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.079965 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.781597     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.082067 1540905 logs.go:138] Found kubelet problem: May 28 22:24:20 old-k8s-version-137556 kubelet[738]: E0528 22:24:20.203798     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.083824 1540905 logs.go:138] Found kubelet problem: May 28 22:24:33 old-k8s-version-137556 kubelet[738]: E0528 22:24:33.708202     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.084544 1540905 logs.go:138] Found kubelet problem: May 28 22:24:38 old-k8s-version-137556 kubelet[738]: E0528 22:24:38.049446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.084872 1540905 logs.go:138] Found kubelet problem: May 28 22:24:39 old-k8s-version-137556 kubelet[738]: E0528 22:24:39.048542     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.085200 1540905 logs.go:138] Found kubelet problem: May 28 22:24:46 old-k8s-version-137556 kubelet[738]: E0528 22:24:46.690430     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.087272 1540905 logs.go:138] Found kubelet problem: May 28 22:24:48 old-k8s-version-137556 kubelet[738]: E0528 22:24:48.729167     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.087463 1540905 logs.go:138] Found kubelet problem: May 28 22:25:00 old-k8s-version-137556 kubelet[738]: E0528 22:25:00.705605     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.088052 1540905 logs.go:138] Found kubelet problem: May 28 22:25:02 old-k8s-version-137556 kubelet[738]: E0528 22:25:02.084189     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.088378 1540905 logs.go:138] Found kubelet problem: May 28 22:25:06 old-k8s-version-137556 kubelet[738]: E0528 22:25:06.690524     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.088562 1540905 logs.go:138] Found kubelet problem: May 28 22:25:13 old-k8s-version-137556 kubelet[738]: E0528 22:25:13.705095     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.088886 1540905 logs.go:138] Found kubelet problem: May 28 22:25:19 old-k8s-version-137556 kubelet[738]: E0528 22:25:19.704584     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.089072 1540905 logs.go:138] Found kubelet problem: May 28 22:25:24 old-k8s-version-137556 kubelet[738]: E0528 22:25:24.704685     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.089655 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.141913     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.091744 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.718176     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.092073 1540905 logs.go:138] Found kubelet problem: May 28 22:25:36 old-k8s-version-137556 kubelet[738]: E0528 22:25:36.690449     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.092399 1540905 logs.go:138] Found kubelet problem: May 28 22:25:47 old-k8s-version-137556 kubelet[738]: E0528 22:25:47.705446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.092585 1540905 logs.go:138] Found kubelet problem: May 28 22:25:49 old-k8s-version-137556 kubelet[738]: E0528 22:25:49.705980     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.092918 1540905 logs.go:138] Found kubelet problem: May 28 22:26:00 old-k8s-version-137556 kubelet[738]: E0528 22:26:00.704155     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.093104 1540905 logs.go:138] Found kubelet problem: May 28 22:26:01 old-k8s-version-137556 kubelet[738]: E0528 22:26:01.704811     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.093692 1540905 logs.go:138] Found kubelet problem: May 28 22:26:15 old-k8s-version-137556 kubelet[738]: E0528 22:26:15.207095     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094030 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.690320     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094212 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.704608     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.094541 1540905 logs.go:138] Found kubelet problem: May 28 22:26:28 old-k8s-version-137556 kubelet[738]: E0528 22:26:28.704143     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094726 1540905 logs.go:138] Found kubelet problem: May 28 22:26:29 old-k8s-version-137556 kubelet[738]: E0528 22:26:29.704616     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.095052 1540905 logs.go:138] Found kubelet problem: May 28 22:26:39 old-k8s-version-137556 kubelet[738]: E0528 22:26:39.704876     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.095237 1540905 logs.go:138] Found kubelet problem: May 28 22:26:43 old-k8s-version-137556 kubelet[738]: E0528 22:26:43.705110     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.096084 1540905 logs.go:138] Found kubelet problem: May 28 22:26:53 old-k8s-version-137556 kubelet[738]: E0528 22:26:53.704310     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.096269 1540905 logs.go:138] Found kubelet problem: May 28 22:26:54 old-k8s-version-137556 kubelet[738]: E0528 22:26:54.704655     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.096595 1540905 logs.go:138] Found kubelet problem: May 28 22:27:06 old-k8s-version-137556 kubelet[738]: E0528 22:27:06.704128     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.098660 1540905 logs.go:138] Found kubelet problem: May 28 22:27:09 old-k8s-version-137556 kubelet[738]: E0528 22:27:09.715928     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.098988 1540905 logs.go:138] Found kubelet problem: May 28 22:27:17 old-k8s-version-137556 kubelet[738]: E0528 22:27:17.704167     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.099172 1540905 logs.go:138] Found kubelet problem: May 28 22:27:21 old-k8s-version-137556 kubelet[738]: E0528 22:27:21.704885     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.099502 1540905 logs.go:138] Found kubelet problem: May 28 22:27:30 old-k8s-version-137556 kubelet[738]: E0528 22:27:30.704161     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.099687 1540905 logs.go:138] Found kubelet problem: May 28 22:27:33 old-k8s-version-137556 kubelet[738]: E0528 22:27:33.704586     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.100291 1540905 logs.go:138] Found kubelet problem: May 28 22:27:44 old-k8s-version-137556 kubelet[738]: E0528 22:27:44.340886     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.100622 1540905 logs.go:138] Found kubelet problem: May 28 22:27:46 old-k8s-version-137556 kubelet[738]: E0528 22:27:46.690347     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.100807 1540905 logs.go:138] Found kubelet problem: May 28 22:27:48 old-k8s-version-137556 kubelet[738]: E0528 22:27:48.704625     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101135 1540905 logs.go:138] Found kubelet problem: May 28 22:27:58 old-k8s-version-137556 kubelet[738]: E0528 22:27:58.704805     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.101320 1540905 logs.go:138] Found kubelet problem: May 28 22:28:00 old-k8s-version-137556 kubelet[738]: E0528 22:28:00.704891     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101634 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705029     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101831 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705742     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.102015 1540905 logs.go:138] Found kubelet problem: May 28 22:28:24 old-k8s-version-137556 kubelet[738]: E0528 22:28:24.704663     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.102358 1540905 logs.go:138] Found kubelet problem: May 28 22:28:26 old-k8s-version-137556 kubelet[738]: E0528 22:28:26.704162     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.102544 1540905 logs.go:138] Found kubelet problem: May 28 22:28:39 old-k8s-version-137556 kubelet[738]: E0528 22:28:39.704787     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.102873 1540905 logs.go:138] Found kubelet problem: May 28 22:28:40 old-k8s-version-137556 kubelet[738]: E0528 22:28:40.704255     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.103452 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.704205     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.103640 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.103825 1540905 logs.go:138] Found kubelet problem: May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.104151 1540905 logs.go:138] Found kubelet problem: May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.104340 1540905 logs.go:138] Found kubelet problem: May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.104665 1540905 logs.go:138] Found kubelet problem: May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	I0528 22:29:22.104674 1540905 logs.go:123] Gathering logs for kube-proxy [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b] ...
	I0528 22:29:22.104688 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:22.141117 1540905 logs.go:123] Gathering logs for kubernetes-dashboard [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527] ...
	I0528 22:29:22.141148 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:22.180279 1540905 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:22.180303 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:22.275290 1540905 logs.go:123] Gathering logs for container status ...
	I0528 22:29:22.275341 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:22.320717 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:22.320742 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:22.320829 1540905 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0528 22:29:22.320847 1540905 out.go:239]   May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.320988 1540905 out.go:239]   May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.321016 1540905 out.go:239]   May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	  May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.321024 1540905 out.go:239]   May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.321042 1540905 out.go:239]   May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	  May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	I0528 22:29:22.321050 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:22.321057 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:29:32.323043 1540905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:29:32.335698 1540905 api_server.go:72] duration metric: took 5m48.896786162s to wait for apiserver process to appear ...
	I0528 22:29:32.335723 1540905 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:29:32.335762 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:32.335824 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:32.374464 1540905 cri.go:89] found id: "9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:32.374485 1540905 cri.go:89] found id: ""
	I0528 22:29:32.374494 1540905 logs.go:276] 1 containers: [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3]
	I0528 22:29:32.374556 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.378665 1540905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:32.378735 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:32.428280 1540905 cri.go:89] found id: "c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:32.428310 1540905 cri.go:89] found id: ""
	I0528 22:29:32.428319 1540905 logs.go:276] 1 containers: [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8]
	I0528 22:29:32.428377 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.432088 1540905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:32.432153 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:32.474292 1540905 cri.go:89] found id: "cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:32.474312 1540905 cri.go:89] found id: ""
	I0528 22:29:32.474319 1540905 logs.go:276] 1 containers: [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a]
	I0528 22:29:32.474376 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.477891 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:32.477960 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:32.521895 1540905 cri.go:89] found id: "b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:32.521917 1540905 cri.go:89] found id: ""
	I0528 22:29:32.521925 1540905 logs.go:276] 1 containers: [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105]
	I0528 22:29:32.521982 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.526385 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:32.526465 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:32.568221 1540905 cri.go:89] found id: "7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:32.568241 1540905 cri.go:89] found id: ""
	I0528 22:29:32.568249 1540905 logs.go:276] 1 containers: [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b]
	I0528 22:29:32.568304 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.571975 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:32.572079 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:32.609486 1540905 cri.go:89] found id: "41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:32.609510 1540905 cri.go:89] found id: ""
	I0528 22:29:32.609517 1540905 logs.go:276] 1 containers: [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29]
	I0528 22:29:32.609574 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.612947 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:32.613016 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:32.652655 1540905 cri.go:89] found id: "dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:32.652680 1540905 cri.go:89] found id: ""
	I0528 22:29:32.652689 1540905 logs.go:276] 1 containers: [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5]
	I0528 22:29:32.652748 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.656315 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:32.656442 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:32.693909 1540905 cri.go:89] found id: "bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:32.693941 1540905 cri.go:89] found id: ""
	I0528 22:29:32.693949 1540905 logs.go:276] 1 containers: [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527]
	I0528 22:29:32.694072 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.697734 1540905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:32.697815 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:32.736027 1540905 cri.go:89] found id: "4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:32.736053 1540905 cri.go:89] found id: "05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:32.736058 1540905 cri.go:89] found id: ""
	I0528 22:29:32.736065 1540905 logs.go:276] 2 containers: [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998]
	I0528 22:29:32.736150 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.742328 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.746335 1540905 logs.go:123] Gathering logs for storage-provisioner [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33] ...
	I0528 22:29:32.746363 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:32.783928 1540905 logs.go:123] Gathering logs for storage-provisioner [05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998] ...
	I0528 22:29:32.783955 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:32.821570 1540905 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:32.821598 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:32.907550 1540905 logs.go:123] Gathering logs for kubernetes-dashboard [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527] ...
	I0528 22:29:32.907629 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:32.952689 1540905 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:32.952720 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:32.971568 1540905 logs.go:123] Gathering logs for etcd [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8] ...
	I0528 22:29:32.971598 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:33.039002 1540905 logs.go:123] Gathering logs for kube-scheduler [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105] ...
	I0528 22:29:33.039034 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:33.090654 1540905 logs.go:123] Gathering logs for kube-proxy [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b] ...
	I0528 22:29:33.090687 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:33.136556 1540905 logs.go:123] Gathering logs for kindnet [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5] ...
	I0528 22:29:33.136585 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:33.192155 1540905 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:33.192228 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:33.246432 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.186601     738 reflector.go:138] object-"kube-system"/"kindnet-token-ztnvd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ztnvd" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.246672 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187153     738 reflector.go:138] object-"kube-system"/"kube-proxy-token-cs2pc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-cs2pc" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.246887 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187714     738 reflector.go:138] object-"default"/"default-token-k8l4j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-k8l4j" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247119 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.194216     738 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cswbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cswbw" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247320 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199146     738 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247555 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199216     738 reflector.go:138] object-"kube-system"/"coredns-token-q7qdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q7qdg" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247760 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199262     738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.252483 1540905 logs.go:138] Found kubelet problem: May 28 22:24:04 old-k8s-version-137556 kubelet[738]: E0528 22:24:04.243616     738 reflector.go:138] object-"kube-system"/"metrics-server-token-42d6m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-42d6m" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.256917 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.119557     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.257113 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.781597     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.259198 1540905 logs.go:138] Found kubelet problem: May 28 22:24:20 old-k8s-version-137556 kubelet[738]: E0528 22:24:20.203798     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.260974 1540905 logs.go:138] Found kubelet problem: May 28 22:24:33 old-k8s-version-137556 kubelet[738]: E0528 22:24:33.708202     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.261692 1540905 logs.go:138] Found kubelet problem: May 28 22:24:38 old-k8s-version-137556 kubelet[738]: E0528 22:24:38.049446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.262026 1540905 logs.go:138] Found kubelet problem: May 28 22:24:39 old-k8s-version-137556 kubelet[738]: E0528 22:24:39.048542     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.262358 1540905 logs.go:138] Found kubelet problem: May 28 22:24:46 old-k8s-version-137556 kubelet[738]: E0528 22:24:46.690430     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.265262 1540905 logs.go:138] Found kubelet problem: May 28 22:24:48 old-k8s-version-137556 kubelet[738]: E0528 22:24:48.729167     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.265472 1540905 logs.go:138] Found kubelet problem: May 28 22:25:00 old-k8s-version-137556 kubelet[738]: E0528 22:25:00.705605     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.266085 1540905 logs.go:138] Found kubelet problem: May 28 22:25:02 old-k8s-version-137556 kubelet[738]: E0528 22:25:02.084189     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.266416 1540905 logs.go:138] Found kubelet problem: May 28 22:25:06 old-k8s-version-137556 kubelet[738]: E0528 22:25:06.690524     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.266603 1540905 logs.go:138] Found kubelet problem: May 28 22:25:13 old-k8s-version-137556 kubelet[738]: E0528 22:25:13.705095     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.266933 1540905 logs.go:138] Found kubelet problem: May 28 22:25:19 old-k8s-version-137556 kubelet[738]: E0528 22:25:19.704584     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.267118 1540905 logs.go:138] Found kubelet problem: May 28 22:25:24 old-k8s-version-137556 kubelet[738]: E0528 22:25:24.704685     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.267735 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.141913     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.269944 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.718176     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.270294 1540905 logs.go:138] Found kubelet problem: May 28 22:25:36 old-k8s-version-137556 kubelet[738]: E0528 22:25:36.690449     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.270624 1540905 logs.go:138] Found kubelet problem: May 28 22:25:47 old-k8s-version-137556 kubelet[738]: E0528 22:25:47.705446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.271334 1540905 logs.go:138] Found kubelet problem: May 28 22:25:49 old-k8s-version-137556 kubelet[738]: E0528 22:25:49.705980     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.271686 1540905 logs.go:138] Found kubelet problem: May 28 22:26:00 old-k8s-version-137556 kubelet[738]: E0528 22:26:00.704155     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.271875 1540905 logs.go:138] Found kubelet problem: May 28 22:26:01 old-k8s-version-137556 kubelet[738]: E0528 22:26:01.704811     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.272461 1540905 logs.go:138] Found kubelet problem: May 28 22:26:15 old-k8s-version-137556 kubelet[738]: E0528 22:26:15.207095     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.272794 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.690320     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.272980 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.704608     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.273349 1540905 logs.go:138] Found kubelet problem: May 28 22:26:28 old-k8s-version-137556 kubelet[738]: E0528 22:26:28.704143     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.273541 1540905 logs.go:138] Found kubelet problem: May 28 22:26:29 old-k8s-version-137556 kubelet[738]: E0528 22:26:29.704616     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.273874 1540905 logs.go:138] Found kubelet problem: May 28 22:26:39 old-k8s-version-137556 kubelet[738]: E0528 22:26:39.704876     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.274074 1540905 logs.go:138] Found kubelet problem: May 28 22:26:43 old-k8s-version-137556 kubelet[738]: E0528 22:26:43.705110     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.274980 1540905 logs.go:138] Found kubelet problem: May 28 22:26:53 old-k8s-version-137556 kubelet[738]: E0528 22:26:53.704310     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.275170 1540905 logs.go:138] Found kubelet problem: May 28 22:26:54 old-k8s-version-137556 kubelet[738]: E0528 22:26:54.704655     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.275499 1540905 logs.go:138] Found kubelet problem: May 28 22:27:06 old-k8s-version-137556 kubelet[738]: E0528 22:27:06.704128     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.277543 1540905 logs.go:138] Found kubelet problem: May 28 22:27:09 old-k8s-version-137556 kubelet[738]: E0528 22:27:09.715928     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.277870 1540905 logs.go:138] Found kubelet problem: May 28 22:27:17 old-k8s-version-137556 kubelet[738]: E0528 22:27:17.704167     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.278074 1540905 logs.go:138] Found kubelet problem: May 28 22:27:21 old-k8s-version-137556 kubelet[738]: E0528 22:27:21.704885     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.278407 1540905 logs.go:138] Found kubelet problem: May 28 22:27:30 old-k8s-version-137556 kubelet[738]: E0528 22:27:30.704161     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.278671 1540905 logs.go:138] Found kubelet problem: May 28 22:27:33 old-k8s-version-137556 kubelet[738]: E0528 22:27:33.704586     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.279301 1540905 logs.go:138] Found kubelet problem: May 28 22:27:44 old-k8s-version-137556 kubelet[738]: E0528 22:27:44.340886     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.279669 1540905 logs.go:138] Found kubelet problem: May 28 22:27:46 old-k8s-version-137556 kubelet[738]: E0528 22:27:46.690347     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.279882 1540905 logs.go:138] Found kubelet problem: May 28 22:27:48 old-k8s-version-137556 kubelet[738]: E0528 22:27:48.704625     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.280239 1540905 logs.go:138] Found kubelet problem: May 28 22:27:58 old-k8s-version-137556 kubelet[738]: E0528 22:27:58.704805     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.280461 1540905 logs.go:138] Found kubelet problem: May 28 22:28:00 old-k8s-version-137556 kubelet[738]: E0528 22:28:00.704891     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.280810 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705029     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.281039 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705742     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.281253 1540905 logs.go:138] Found kubelet problem: May 28 22:28:24 old-k8s-version-137556 kubelet[738]: E0528 22:28:24.704663     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.281601 1540905 logs.go:138] Found kubelet problem: May 28 22:28:26 old-k8s-version-137556 kubelet[738]: E0528 22:28:26.704162     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.281815 1540905 logs.go:138] Found kubelet problem: May 28 22:28:39 old-k8s-version-137556 kubelet[738]: E0528 22:28:39.704787     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.282338 1540905 logs.go:138] Found kubelet problem: May 28 22:28:40 old-k8s-version-137556 kubelet[738]: E0528 22:28:40.704255     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.282977 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.704205     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.283198 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.283416 1540905 logs.go:138] Found kubelet problem: May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.284178 1540905 logs.go:138] Found kubelet problem: May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.284427 1540905 logs.go:138] Found kubelet problem: May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.284797 1540905 logs.go:138] Found kubelet problem: May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.285031 1540905 logs.go:138] Found kubelet problem: May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:29:33.285058 1540905 logs.go:123] Gathering logs for kube-apiserver [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3] ...
	I0528 22:29:33.285090 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:33.358393 1540905 logs.go:123] Gathering logs for container status ...
	I0528 22:29:33.358434 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:33.410155 1540905 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:33.410232 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:33.558604 1540905 logs.go:123] Gathering logs for coredns [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a] ...
	I0528 22:29:33.558637 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:33.598686 1540905 logs.go:123] Gathering logs for kube-controller-manager [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29] ...
	I0528 22:29:33.598716 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:33.670199 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:33.670230 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:33.670308 1540905 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0528 22:29:33.670324 1540905 out.go:239]   May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.670455 1540905 out.go:239]   May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	  May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.670464 1540905 out.go:239]   May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.670488 1540905 out.go:239]   May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	  May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.670497 1540905 out.go:239]   May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:29:33.670507 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:33.670514 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:29:43.671573 1540905 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0528 22:29:43.691524 1540905 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0528 22:29:43.693704 1540905 out.go:177] 
	W0528 22:29:43.696067 1540905 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0528 22:29:43.696109 1540905 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0528 22:29:43.696128 1540905 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0528 22:29:43.696133 1540905 out.go:239] * 
	* 
	W0528 22:29:43.697172 1540905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 22:29:43.699282 1540905 out.go:177] 

                                                
                                                
** /stderr **
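The kubelet problems quoted in the stderr above are the two pods that never settle in this run: metrics-server sits in ImagePullBackOff because the addon was enabled against the unreachable registry fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entry in the Audit table further down), and dashboard-metrics-scraper is in a 2m40s CrashLoopBackOff. When triaging a dump like the `minikube logs --file=logs.txt` file requested in the box above, a throwaway scan for those two back-off markers is usually enough to locate the offending pods. The following is only a sketch under assumptions (the file name and the two patterns are illustrative; this is not part of the test suite):

// Throwaway sketch: scan a saved "minikube logs --file=logs.txt" dump for the
// kubelet problem lines quoted above. File name and patterns are assumptions.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("logs.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "ImagePullBackOff") || strings.Contains(line, "CrashLoopBackOff") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}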
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-137556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
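The failure itself is the K8S_UNHEALTHY_CONTROL_PLANE exit shown in the stderr above: the raw probe against https://192.168.85.2:8443/healthz does return 200 "ok" (the api_server.go lines), but the 6m0s node wait still gives up because, per the message, the control plane never updated to v1.20.0. Reproducing the probe by hand is straightforward; the snippet below is a minimal stand-alone sketch of such a check, not minikube's implementation, and it skips TLS verification only because the endpoint is served with minikube's self-signed CA.

// Minimal stand-alone sketch (not minikube's code): probe the apiserver
// /healthz endpoint seen in the log above and print the status and body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}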
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-137556
helpers_test.go:235: (dbg) docker inspect old-k8s-version-137556:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592",
	        "Created": "2024-05-28T22:20:30.593486142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1541142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T22:23:35.236400801Z",
	            "FinishedAt": "2024-05-28T22:23:33.929963613Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592/hostname",
	        "HostsPath": "/var/lib/docker/containers/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592/hosts",
	        "LogPath": "/var/lib/docker/containers/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592-json.log",
	        "Name": "/old-k8s-version-137556",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-137556:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-137556",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dcb7b39eec5c282cf08db0a244cb2bbb481d91d8b22f64c75f2fd4605431dc6c-init/diff:/var/lib/docker/overlay2/41cb90b313a958e97d6c40ed76425369b134e98a770fd8f601707592b588c01d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcb7b39eec5c282cf08db0a244cb2bbb481d91d8b22f64c75f2fd4605431dc6c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcb7b39eec5c282cf08db0a244cb2bbb481d91d8b22f64c75f2fd4605431dc6c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcb7b39eec5c282cf08db0a244cb2bbb481d91d8b22f64c75f2fd4605431dc6c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-137556",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-137556/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-137556",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-137556",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-137556",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6dcf91602b61d4206e9bbc36d4a140d5195f1b3ee4ffb18922d9a3a883ee499f",
	            "SandboxKey": "/var/run/docker/netns/6dcf91602b61",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34588"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34587"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34586"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-137556": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "06df03bcb9a730cd78ebc03cfb3323e1208621b10bd8b73f11e4f24de9725707",
	                    "EndpointID": "1ac3e023546291c4b1869fafc48bb4a7a3b4b2df3704ad860b9d8d0feaac0750",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-137556",
	                        "34fd7d98b135"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
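Everything the harness needs about the container is in the inspect dump above; rather than parsing the full JSON, minikube narrows it to single fields with Go templates. Later in this log the forwarded SSH port of the no-preload-264173 container is resolved with exactly that kind of `docker container inspect -f` call, and for old-k8s-version-137556 the same 22/tcp mapping is host port 34589. A hedged illustration of the pattern follows (the container name is taken from this report; this is not the harness's own code):

// Illustrative only: extract the forwarded SSH port from docker inspect with a
// Go template instead of reading the full JSON dump shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"old-k8s-version-137556").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}

Run against a live container, this prints the single mapped port rather than the multi-hundred-line JSON document.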
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-137556 -n old-k8s-version-137556
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-137556 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-137556 logs -n 25: (2.090093681s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-982195 sudo find                             | cilium-982195             | jenkins | v1.33.1 | 28 May 24 22:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-982195 sudo crio                             | cilium-982195             | jenkins | v1.33.1 | 28 May 24 22:18 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-982195                                       | cilium-982195             | jenkins | v1.33.1 | 28 May 24 22:18 UTC | 28 May 24 22:18 UTC |
	| start   | -p force-systemd-env-959636                            | force-systemd-env-959636  | jenkins | v1.33.1 | 28 May 24 22:18 UTC | 28 May 24 22:19 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-510763                           | kubernetes-upgrade-510763 | jenkins | v1.33.1 | 28 May 24 22:19 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-510763                           | kubernetes-upgrade-510763 | jenkins | v1.33.1 | 28 May 24 22:19 UTC | 28 May 24 22:19 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-510763                           | kubernetes-upgrade-510763 | jenkins | v1.33.1 | 28 May 24 22:19 UTC | 28 May 24 22:19 UTC |
	| start   | -p cert-expiration-511332                              | cert-expiration-511332    | jenkins | v1.33.1 | 28 May 24 22:19 UTC | 28 May 24 22:20 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-959636                            | force-systemd-env-959636  | jenkins | v1.33.1 | 28 May 24 22:19 UTC | 28 May 24 22:19 UTC |
	| start   | -p cert-options-768265                                 | cert-options-768265       | jenkins | v1.33.1 | 28 May 24 22:19 UTC | 28 May 24 22:20 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | cert-options-768265 ssh                                | cert-options-768265       | jenkins | v1.33.1 | 28 May 24 22:20 UTC | 28 May 24 22:20 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-768265 -- sudo                         | cert-options-768265       | jenkins | v1.33.1 | 28 May 24 22:20 UTC | 28 May 24 22:20 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-768265                                 | cert-options-768265       | jenkins | v1.33.1 | 28 May 24 22:20 UTC | 28 May 24 22:20 UTC |
	| start   | -p old-k8s-version-137556                              | old-k8s-version-137556    | jenkins | v1.33.1 | 28 May 24 22:20 UTC | 28 May 24 22:23 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-511332                              | cert-expiration-511332    | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:23 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-137556        | old-k8s-version-137556    | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-137556                              | old-k8s-version-137556    | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:23 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-511332                              | cert-expiration-511332    | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:23 UTC |
	| start   | -p no-preload-264173                                   | no-preload-264173         | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:24 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-137556             | old-k8s-version-137556    | jenkins | v1.33.1 | 28 May 24 22:23 UTC | 28 May 24 22:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-137556                              | old-k8s-version-137556    | jenkins | v1.33.1 | 28 May 24 22:23 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-264173             | no-preload-264173         | jenkins | v1.33.1 | 28 May 24 22:24 UTC | 28 May 24 22:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-264173                                   | no-preload-264173         | jenkins | v1.33.1 | 28 May 24 22:24 UTC | 28 May 24 22:25 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-264173                  | no-preload-264173         | jenkins | v1.33.1 | 28 May 24 22:25 UTC | 28 May 24 22:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-264173                                   | no-preload-264173         | jenkins | v1.33.1 | 28 May 24 22:25 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:25:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:25:00.898762 1546162 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:25:00.899001 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:25:00.899016 1546162 out.go:304] Setting ErrFile to fd 2...
	I0528 22:25:00.899021 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:25:00.899325 1546162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 22:25:00.899747 1546162 out.go:298] Setting JSON to false
	I0528 22:25:00.900812 1546162 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22049,"bootTime":1716913052,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 22:25:00.900891 1546162 start.go:139] virtualization:  
	I0528 22:25:00.905505 1546162 out.go:177] * [no-preload-264173] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 22:25:00.907719 1546162 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:25:00.910487 1546162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:25:00.907774 1546162 notify.go:220] Checking for updates...
	I0528 22:25:00.913241 1546162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:25:00.915438 1546162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 22:25:00.917565 1546162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 22:25:00.919862 1546162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:25:00.922785 1546162 config.go:182] Loaded profile config "no-preload-264173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:25:00.923331 1546162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:25:00.948567 1546162 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 22:25:00.948717 1546162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:25:01.021288 1546162 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:25:01.011657777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:25:01.021402 1546162 docker.go:295] overlay module found
	I0528 22:25:01.024744 1546162 out.go:177] * Using the docker driver based on existing profile
	I0528 22:25:01.027147 1546162 start.go:297] selected driver: docker
	I0528 22:25:01.027173 1546162 start.go:901] validating driver "docker" against &{Name:no-preload-264173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-264173 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:25:01.027374 1546162 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:25:01.028059 1546162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:25:01.079767 1546162 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:25:01.069750615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:25:01.080157 1546162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:25:01.080187 1546162 cni.go:84] Creating CNI manager for ""
	I0528 22:25:01.080195 1546162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 22:25:01.080254 1546162 start.go:340] cluster config:
	{Name:no-preload-264173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-264173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:25:01.082911 1546162 out.go:177] * Starting "no-preload-264173" primary control-plane node in "no-preload-264173" cluster
	I0528 22:25:01.085222 1546162 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 22:25:01.087522 1546162 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 22:25:01.090440 1546162 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 22:25:01.090629 1546162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:25:01.090768 1546162 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/config.json ...
	I0528 22:25:01.091081 1546162 cache.go:107] acquiring lock: {Name:mk657eded7f3162e770147cb0b4b3152fb7e76ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091163 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0528 22:25:01.091177 1546162 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.732µs
	I0528 22:25:01.091194 1546162 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0528 22:25:01.091303 1546162 cache.go:107] acquiring lock: {Name:mkab6f49139af1588edb99533d7df58a239cfcf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091364 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0528 22:25:01.091373 1546162 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 74.944µs
	I0528 22:25:01.091381 1546162 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0528 22:25:01.091393 1546162 cache.go:107] acquiring lock: {Name:mk2c7d6b1c9426f8d4fbc55d66c0bae5158c4382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091431 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0528 22:25:01.091440 1546162 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 48.835µs
	I0528 22:25:01.091446 1546162 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0528 22:25:01.091456 1546162 cache.go:107] acquiring lock: {Name:mkb8d69f358bc4da52023bf1814c5a76fe601c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091500 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0528 22:25:01.091510 1546162 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 55.891µs
	I0528 22:25:01.091517 1546162 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0528 22:25:01.091526 1546162 cache.go:107] acquiring lock: {Name:mkd1d0631bf4ec5aa92850f985fdd52f6d1b897a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091556 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0528 22:25:01.091561 1546162 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 36.167µs
	I0528 22:25:01.091571 1546162 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0528 22:25:01.091585 1546162 cache.go:107] acquiring lock: {Name:mk8e65c5299968149ba0b83461da6df903fe8422 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091620 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0528 22:25:01.091629 1546162 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 47.646µs
	I0528 22:25:01.091635 1546162 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0528 22:25:01.091647 1546162 cache.go:107] acquiring lock: {Name:mk4c83f437e70c0f7f8995f5f4a8d6feb99324c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091680 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0528 22:25:01.091690 1546162 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 43.708µs
	I0528 22:25:01.091696 1546162 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0528 22:25:01.091711 1546162 cache.go:107] acquiring lock: {Name:mk52cbc84a8bf1a6423d99bba62b99f08b7b20c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.091743 1546162 cache.go:115] /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0528 22:25:01.091754 1546162 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 49.574µs
	I0528 22:25:01.091763 1546162 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0528 22:25:01.091770 1546162 cache.go:87] Successfully saved all images to host disk.
	I0528 22:25:01.111086 1546162 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon, skipping pull
	I0528 22:25:01.111117 1546162 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in daemon, skipping load
	I0528 22:25:01.111139 1546162 cache.go:194] Successfully downloaded all kic artifacts
	I0528 22:25:01.111167 1546162 start.go:360] acquireMachinesLock for no-preload-264173: {Name:mkaefb5e133965f56a195fd090186d52ce52ef7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:25:01.111313 1546162 start.go:364] duration metric: took 124.608µs to acquireMachinesLock for "no-preload-264173"
	I0528 22:25:01.111341 1546162 start.go:96] Skipping create...Using existing machine configuration
	I0528 22:25:01.111359 1546162 fix.go:54] fixHost starting: 
	I0528 22:25:01.111650 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:01.127574 1546162 fix.go:112] recreateIfNeeded on no-preload-264173: state=Stopped err=<nil>
	W0528 22:25:01.127608 1546162 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 22:25:01.130517 1546162 out.go:177] * Restarting existing docker container for "no-preload-264173" ...
	I0528 22:25:01.446499 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:03.946841 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:01.132971 1546162 cli_runner.go:164] Run: docker start no-preload-264173
	I0528 22:25:01.479923 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:01.505196 1546162 kic.go:430] container "no-preload-264173" state is running.
	I0528 22:25:01.505737 1546162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-264173
	I0528 22:25:01.536246 1546162 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/config.json ...
	I0528 22:25:01.536474 1546162 machine.go:94] provisionDockerMachine start ...
	I0528 22:25:01.536541 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:01.562236 1546162 main.go:141] libmachine: Using SSH client type: native
	I0528 22:25:01.562867 1546162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I0528 22:25:01.562886 1546162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:25:01.563784 1546162 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0528 22:25:04.685817 1546162 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-264173
	
	I0528 22:25:04.685841 1546162 ubuntu.go:169] provisioning hostname "no-preload-264173"
	I0528 22:25:04.685915 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:04.705801 1546162 main.go:141] libmachine: Using SSH client type: native
	I0528 22:25:04.706131 1546162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I0528 22:25:04.706151 1546162 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-264173 && echo "no-preload-264173" | sudo tee /etc/hostname
	I0528 22:25:04.842429 1546162 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-264173
	
	I0528 22:25:04.842510 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:04.859309 1546162 main.go:141] libmachine: Using SSH client type: native
	I0528 22:25:04.859575 1546162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I0528 22:25:04.859605 1546162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-264173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-264173/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-264173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:25:04.987308 1546162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:25:04.987337 1546162 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1349783/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1349783/.minikube}
	I0528 22:25:04.987375 1546162 ubuntu.go:177] setting up certificates
	I0528 22:25:04.987387 1546162 provision.go:84] configureAuth start
	I0528 22:25:04.987471 1546162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-264173
	I0528 22:25:05.009521 1546162 provision.go:143] copyHostCerts
	I0528 22:25:05.009622 1546162 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem, removing ...
	I0528 22:25:05.009633 1546162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem
	I0528 22:25:05.009725 1546162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.pem (1082 bytes)
	I0528 22:25:05.009841 1546162 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem, removing ...
	I0528 22:25:05.009847 1546162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem
	I0528 22:25:05.009880 1546162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/cert.pem (1123 bytes)
	I0528 22:25:05.009937 1546162 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem, removing ...
	I0528 22:25:05.009942 1546162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem
	I0528 22:25:05.009970 1546162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1349783/.minikube/key.pem (1675 bytes)
	I0528 22:25:05.010056 1546162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem org=jenkins.no-preload-264173 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-264173]
	I0528 22:25:05.347349 1546162 provision.go:177] copyRemoteCerts
	I0528 22:25:05.347425 1546162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:25:05.347473 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:05.367428 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:05.471164 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 22:25:05.498236 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:25:05.523196 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 22:25:05.548839 1546162 provision.go:87] duration metric: took 561.434071ms to configureAuth
	I0528 22:25:05.548865 1546162 ubuntu.go:193] setting minikube options for container-runtime
	I0528 22:25:05.549096 1546162 config.go:182] Loaded profile config "no-preload-264173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:25:05.549217 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:05.569989 1546162 main.go:141] libmachine: Using SSH client type: native
	I0528 22:25:05.570307 1546162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I0528 22:25:05.570333 1546162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:25:05.937666 1546162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:25:05.937745 1546162 machine.go:97] duration metric: took 4.401260705s to provisionDockerMachine
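	Note: the provisioning step above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS and restarts crio over SSH. A minimal manual check of the same state, assuming the profile name no-preload-264173 from this log (the commands are illustrative and not part of the recorded test run):

	minikube -p no-preload-264173 ssh -- cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	minikube -p no-preload-264173 ssh -- sudo systemctl is-active crio      # expect: active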
	I0528 22:25:05.937770 1546162 start.go:293] postStartSetup for "no-preload-264173" (driver="docker")
	I0528 22:25:05.937812 1546162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:25:05.937899 1546162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:25:05.937984 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:05.962474 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:06.067602 1546162 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:25:06.071412 1546162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 22:25:06.071453 1546162 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 22:25:06.071464 1546162 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 22:25:06.071471 1546162 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 22:25:06.071483 1546162 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/addons for local assets ...
	I0528 22:25:06.071556 1546162 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1349783/.minikube/files for local assets ...
	I0528 22:25:06.071666 1546162 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem -> 13551972.pem in /etc/ssl/certs
	I0528 22:25:06.071784 1546162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:25:06.081082 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem --> /etc/ssl/certs/13551972.pem (1708 bytes)
	I0528 22:25:06.110179 1546162 start.go:296] duration metric: took 172.363223ms for postStartSetup
	I0528 22:25:06.110269 1546162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:25:06.110341 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:06.133986 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:06.223465 1546162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 22:25:06.227993 1546162 fix.go:56] duration metric: took 5.116634799s for fixHost
	I0528 22:25:06.228019 1546162 start.go:83] releasing machines lock for "no-preload-264173", held for 5.116690675s
	I0528 22:25:06.228111 1546162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-264173
	I0528 22:25:06.248484 1546162 ssh_runner.go:195] Run: cat /version.json
	I0528 22:25:06.248505 1546162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:25:06.248540 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:06.248563 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:06.270348 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:06.293219 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:06.505358 1546162 ssh_runner.go:195] Run: systemctl --version
	I0528 22:25:06.510168 1546162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:25:06.652833 1546162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 22:25:06.658624 1546162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:25:06.667683 1546162 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0528 22:25:06.667766 1546162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:25:06.676858 1546162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 22:25:06.676882 1546162 start.go:494] detecting cgroup driver to use...
	I0528 22:25:06.676915 1546162 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:25:06.676963 1546162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:25:06.689745 1546162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:25:06.708372 1546162 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:25:06.708438 1546162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:25:06.724636 1546162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:25:06.737641 1546162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:25:06.831442 1546162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:25:06.924316 1546162 docker.go:233] disabling docker service ...
	I0528 22:25:06.924427 1546162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:25:06.937913 1546162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:25:06.950064 1546162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:25:07.046458 1546162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:25:07.133831 1546162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:25:07.146048 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:25:07.162493 1546162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 22:25:07.162582 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.173269 1546162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:25:07.173387 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.183533 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.195859 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.206858 1546162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:25:07.217916 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.231541 1546162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.242577 1546162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:25:07.256166 1546162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:25:07.267876 1546162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:25:07.277133 1546162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:25:07.359288 1546162 ssh_runner.go:195] Run: sudo systemctl restart crio
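	Note: the sed invocations above are intended to leave /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls, after which crio is restarted. A quick illustrative check of the drop-in on the node (this command is an assumption, not taken from the log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf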
	I0528 22:25:07.508652 1546162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:25:07.508776 1546162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:25:07.514583 1546162 start.go:562] Will wait 60s for crictl version
	I0528 22:25:07.514677 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:25:07.519671 1546162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:25:07.563535 1546162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0528 22:25:07.563646 1546162 ssh_runner.go:195] Run: crio --version
	I0528 22:25:07.620626 1546162 ssh_runner.go:195] Run: crio --version
	I0528 22:25:07.672109 1546162 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.24.6 ...
	I0528 22:25:07.674105 1546162 cli_runner.go:164] Run: docker network inspect no-preload-264173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 22:25:07.688481 1546162 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0528 22:25:07.692459 1546162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:25:07.703331 1546162 kubeadm.go:877] updating cluster {Name:no-preload-264173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-264173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:25:07.703488 1546162 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:25:07.703536 1546162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:25:07.751495 1546162 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:25:07.751521 1546162 cache_images.go:84] Images are preloaded, skipping loading
	I0528 22:25:07.751530 1546162 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.30.1 crio true true} ...
	I0528 22:25:07.751640 1546162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-264173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-264173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:25:07.751735 1546162 ssh_runner.go:195] Run: crio config
	I0528 22:25:07.811781 1546162 cni.go:84] Creating CNI manager for ""
	I0528 22:25:07.811803 1546162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 22:25:07.811823 1546162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 22:25:07.811849 1546162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-264173 NodeName:no-preload-264173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 22:25:07.812016 1546162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-264173"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
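	Note: the generated config above bundles four documents: a kubeadm InitConfiguration and ClusterConfiguration, a KubeletConfiguration (cgroupfs driver, crio socket, disk eviction disabled) and a KubeProxyConfiguration. minikube stages it as /var/tmp/minikube/kubeadm.yaml.new and later diffs it against the live copy (see the "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new" run further down) to decide whether the control plane needs reconfiguring. A manual sketch of the same comparison, assuming the paths from this log:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"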
	
	I0528 22:25:07.812080 1546162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 22:25:07.822063 1546162 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:25:07.822157 1546162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:25:07.831349 1546162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0528 22:25:07.851622 1546162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:25:07.871069 1546162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0528 22:25:07.889985 1546162 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0528 22:25:07.893859 1546162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:25:07.905704 1546162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:25:08.004215 1546162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:25:08.021541 1546162 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173 for IP: 192.168.76.2
	I0528 22:25:08.021561 1546162 certs.go:194] generating shared ca certs ...
	I0528 22:25:08.021577 1546162 certs.go:226] acquiring lock for ca certs: {Name:mk3b01431a293453662fa80a6161920f23c6c736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:25:08.021720 1546162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key
	I0528 22:25:08.021769 1546162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key
	I0528 22:25:08.021777 1546162 certs.go:256] generating profile certs ...
	I0528 22:25:08.021864 1546162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.key
	I0528 22:25:08.021927 1546162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/apiserver.key.b07e317e
	I0528 22:25:08.021970 1546162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/proxy-client.key
	I0528 22:25:08.022116 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197.pem (1338 bytes)
	W0528 22:25:08.022147 1546162 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197_empty.pem, impossibly tiny 0 bytes
	I0528 22:25:08.022155 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca-key.pem (1679 bytes)
	I0528 22:25:08.022182 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/ca.pem (1082 bytes)
	I0528 22:25:08.022208 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:25:08.022232 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/key.pem (1675 bytes)
	I0528 22:25:08.022277 1546162 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem (1708 bytes)
	I0528 22:25:08.022999 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:25:08.075978 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:25:08.123437 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:25:08.160622 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:25:08.198772 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 22:25:08.229741 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 22:25:08.257753 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:25:08.283933 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 22:25:08.312516 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:25:08.337680 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/certs/1355197.pem --> /usr/share/ca-certificates/1355197.pem (1338 bytes)
	I0528 22:25:08.363237 1546162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/ssl/certs/13551972.pem --> /usr/share/ca-certificates/13551972.pem (1708 bytes)
	I0528 22:25:08.396228 1546162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:25:08.415342 1546162 ssh_runner.go:195] Run: openssl version
	I0528 22:25:08.422720 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:25:08.433235 1546162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:25:08.436946 1546162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:25:08.437066 1546162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:25:08.445114 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 22:25:08.454693 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355197.pem && ln -fs /usr/share/ca-certificates/1355197.pem /etc/ssl/certs/1355197.pem"
	I0528 22:25:08.465403 1546162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355197.pem
	I0528 22:25:08.469049 1546162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 21:42 /usr/share/ca-certificates/1355197.pem
	I0528 22:25:08.469116 1546162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355197.pem
	I0528 22:25:08.477653 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1355197.pem /etc/ssl/certs/51391683.0"
	I0528 22:25:08.487180 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13551972.pem && ln -fs /usr/share/ca-certificates/13551972.pem /etc/ssl/certs/13551972.pem"
	I0528 22:25:08.497016 1546162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13551972.pem
	I0528 22:25:08.500661 1546162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 21:42 /usr/share/ca-certificates/13551972.pem
	I0528 22:25:08.500723 1546162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13551972.pem
	I0528 22:25:08.507991 1546162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13551972.pem /etc/ssl/certs/3ec20f2e.0"
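	Note: each ln -fs above links a CA file into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients look certificates up. The link name comes from the value printed by the x509 -hash call, for example (illustrative re-run of the same command from the log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the 8-hex-digit subject hash, here b5213941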
	I0528 22:25:08.517692 1546162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:25:08.521329 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 22:25:08.528660 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 22:25:08.535897 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 22:25:08.542992 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 22:25:08.550139 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 22:25:08.557349 1546162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 22:25:08.564735 1546162 kubeadm.go:391] StartCluster: {Name:no-preload-264173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-264173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:25:08.564830 1546162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 22:25:08.564904 1546162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:25:08.620481 1546162 cri.go:89] found id: ""
	I0528 22:25:08.620600 1546162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 22:25:08.630312 1546162 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 22:25:08.630383 1546162 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 22:25:08.630400 1546162 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 22:25:08.630469 1546162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 22:25:08.640220 1546162 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 22:25:08.640826 1546162 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-264173" does not appear in /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:25:08.641086 1546162 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-1349783/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-264173" cluster setting kubeconfig missing "no-preload-264173" context setting]
	I0528 22:25:08.641513 1546162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:25:08.643056 1546162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 22:25:08.655555 1546162 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0528 22:25:08.655591 1546162 kubeadm.go:591] duration metric: took 25.174373ms to restartPrimaryControlPlane
	I0528 22:25:08.655600 1546162 kubeadm.go:393] duration metric: took 90.877005ms to StartCluster
	I0528 22:25:08.655619 1546162 settings.go:142] acquiring lock: {Name:mk3ead4661b05edfaa64061283a93c6a76969cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:25:08.655684 1546162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:25:08.656588 1546162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1349783/kubeconfig: {Name:mkaf5e1534f034576a412c2bb12acf3530c82fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:25:08.656788 1546162 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:25:08.660717 1546162 out.go:177] * Verifying Kubernetes components...
	I0528 22:25:08.657077 1546162 config.go:182] Loaded profile config "no-preload-264173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:25:08.657088 1546162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:25:08.660939 1546162 addons.go:69] Setting storage-provisioner=true in profile "no-preload-264173"
	I0528 22:25:08.663404 1546162 addons.go:234] Setting addon storage-provisioner=true in "no-preload-264173"
	W0528 22:25:08.663434 1546162 addons.go:243] addon storage-provisioner should already be in state true
	I0528 22:25:08.660947 1546162 addons.go:69] Setting dashboard=true in profile "no-preload-264173"
	I0528 22:25:08.663478 1546162 host.go:66] Checking if "no-preload-264173" exists ...
	I0528 22:25:08.663510 1546162 addons.go:234] Setting addon dashboard=true in "no-preload-264173"
	W0528 22:25:08.663613 1546162 addons.go:243] addon dashboard should already be in state true
	I0528 22:25:08.663652 1546162 host.go:66] Checking if "no-preload-264173" exists ...
	I0528 22:25:08.664065 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:08.664066 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:08.663330 1546162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:25:08.660953 1546162 addons.go:69] Setting default-storageclass=true in profile "no-preload-264173"
	I0528 22:25:08.664733 1546162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-264173"
	I0528 22:25:08.660957 1546162 addons.go:69] Setting metrics-server=true in profile "no-preload-264173"
	I0528 22:25:08.664847 1546162 addons.go:234] Setting addon metrics-server=true in "no-preload-264173"
	W0528 22:25:08.664871 1546162 addons.go:243] addon metrics-server should already be in state true
	I0528 22:25:08.664916 1546162 host.go:66] Checking if "no-preload-264173" exists ...
	I0528 22:25:08.665015 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:08.665522 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:08.715457 1546162 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0528 22:25:08.717759 1546162 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0528 22:25:08.725329 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0528 22:25:08.726222 1546162 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0528 22:25:08.726289 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:08.726876 1546162 addons.go:234] Setting addon default-storageclass=true in "no-preload-264173"
	W0528 22:25:08.726889 1546162 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:25:08.726914 1546162 host.go:66] Checking if "no-preload-264173" exists ...
	I0528 22:25:08.727313 1546162 cli_runner.go:164] Run: docker container inspect no-preload-264173 --format={{.State.Status}}
	I0528 22:25:08.740002 1546162 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:25:08.742600 1546162 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:25:08.742625 1546162 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:25:08.742696 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:08.765130 1546162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:25:05.947874 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:08.447084 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:08.770460 1546162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:25:08.770483 1546162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:25:08.770552 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:08.773880 1546162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:25:08.773900 1546162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:25:08.773972 1546162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-264173
	I0528 22:25:08.792487 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:08.814209 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:08.857464 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:08.859844 1546162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/no-preload-264173/id_rsa Username:docker}
	I0528 22:25:09.072358 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:25:09.074707 1546162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:25:09.074779 1546162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:25:09.082521 1546162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:25:09.107035 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:25:09.146834 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0528 22:25:09.146911 1546162 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0528 22:25:09.162374 1546162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:25:09.162398 1546162 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:25:09.191483 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0528 22:25:09.191512 1546162 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0528 22:25:09.256341 1546162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:25:09.256370 1546162 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:25:09.261003 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0528 22:25:09.261077 1546162 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0528 22:25:09.321530 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:25:09.342437 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0528 22:25:09.342509 1546162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0528 22:25:09.375328 1546162 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 22:25:09.375413 1546162 retry.go:31] will retry after 295.051083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 22:25:09.375479 1546162 node_ready.go:35] waiting up to 6m0s for node "no-preload-264173" to be "Ready" ...
	W0528 22:25:09.422268 1546162 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 22:25:09.422364 1546162 retry.go:31] will retry after 142.820685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 22:25:09.427608 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0528 22:25:09.427683 1546162 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0528 22:25:09.505069 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0528 22:25:09.505147 1546162 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0528 22:25:09.523264 1546162 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 22:25:09.523357 1546162 retry.go:31] will retry after 276.038849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
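	Note: all three apply failures above share one cause: the kubelet has only just been restarted and the apiserver is not yet listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema for validation and minikube retries after a short backoff (142-295ms here). An illustrative way to wait for the apiserver from inside the node before such applies would succeed (the /readyz endpoint is standard Kubernetes; this command is not part of the test):

	until curl -sk https://localhost:8443/readyz >/dev/null; do sleep 1; done; echo "apiserver is up"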
	I0528 22:25:09.537624 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0528 22:25:09.537704 1546162 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0528 22:25:09.566228 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:25:09.584457 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0528 22:25:09.584538 1546162 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0528 22:25:09.627557 1546162 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:25:09.627637 1546162 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0528 22:25:09.670839 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:25:09.684946 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:25:09.800323 1546162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:25:10.946031 1540905 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:11.951732 1540905 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:11.951759 1540905 pod_ready.go:81] duration metric: took 1m7.512242141s for pod "kube-controller-manager-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.951771 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8jz6w" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.974855 1540905 pod_ready.go:92] pod "kube-proxy-8jz6w" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:11.974883 1540905 pod_ready.go:81] duration metric: took 23.104329ms for pod "kube-proxy-8jz6w" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:11.974920 1540905 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:13.982194 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:13.672600 1546162 node_ready.go:49] node "no-preload-264173" has status "Ready":"True"
	I0528 22:25:13.672623 1546162 node_ready.go:38] duration metric: took 4.297112165s for node "no-preload-264173" to be "Ready" ...
	I0528 22:25:13.672632 1546162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:25:13.805734 1546162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.239418709s)
	I0528 22:25:13.917810 1546162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4l429" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.036539 1546162 pod_ready.go:92] pod "coredns-7db6d8ff4d-4l429" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:14.036611 1546162 pod_ready.go:81] duration metric: took 118.700904ms for pod "coredns-7db6d8ff4d-4l429" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.036639 1546162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.116019 1546162 pod_ready.go:92] pod "etcd-no-preload-264173" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:14.116089 1546162 pod_ready.go:81] duration metric: took 79.42835ms for pod "etcd-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.116120 1546162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.215691 1546162 pod_ready.go:92] pod "kube-apiserver-no-preload-264173" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:14.215761 1546162 pod_ready.go:81] duration metric: took 99.619809ms for pod "kube-apiserver-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.215787 1546162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.288368 1546162 pod_ready.go:92] pod "kube-controller-manager-no-preload-264173" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:14.288394 1546162 pod_ready.go:81] duration metric: took 72.587973ms for pod "kube-controller-manager-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.288443 1546162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-clscp" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.352894 1546162 pod_ready.go:92] pod "kube-proxy-clscp" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:14.352919 1546162 pod_ready.go:81] duration metric: took 64.462688ms for pod "kube-proxy-clscp" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:14.352932 1546162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:15.197931 1546162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.527004285s)
	I0528 22:25:15.542957 1546162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.857915893s)
	I0528 22:25:15.545354 1546162 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-264173 addons enable metrics-server
	
	I0528 22:25:15.543240 1546162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.742832998s)
	I0528 22:25:15.545472 1546162 addons.go:475] Verifying addon metrics-server=true in "no-preload-264173"
	I0528 22:25:15.548025 1546162 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0528 22:25:15.550702 1546162 addons.go:510] duration metric: took 6.89360502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0528 22:25:16.480792 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:18.482060 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:16.359570 1546162 pod_ready.go:102] pod "kube-scheduler-no-preload-264173" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:18.859476 1546162 pod_ready.go:102] pod "kube-scheduler-no-preload-264173" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:20.482153 1540905 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:20.982171 1540905 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:20.982200 1540905 pod_ready.go:81] duration metric: took 9.007259537s for pod "kube-scheduler-old-k8s-version-137556" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:20.982213 1540905 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:22.989102 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:21.360330 1546162 pod_ready.go:102] pod "kube-scheduler-no-preload-264173" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:22.359497 1546162 pod_ready.go:92] pod "kube-scheduler-no-preload-264173" in "kube-system" namespace has status "Ready":"True"
	I0528 22:25:22.359522 1546162 pod_ready.go:81] duration metric: took 8.006582294s for pod "kube-scheduler-no-preload-264173" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:22.359534 1546162 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace to be "Ready" ...
	I0528 22:25:24.366683 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:24.989608 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:27.489518 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:26.865186 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:28.866064 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:30.868441 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:29.988533 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:31.991247 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:34.488194 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:33.366119 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:35.382357 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:36.495271 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:38.990348 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:37.870607 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:40.365901 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:41.495366 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:43.521873 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:42.372928 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:44.865398 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:45.988876 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:47.989265 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:46.866232 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:49.366258 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:50.488844 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:52.488950 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:54.492792 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:51.865030 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:53.867071 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:56.989265 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:59.488620 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:56.365998 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:25:58.865699 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:01.488762 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:03.988913 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:01.369144 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:03.865131 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:05.865426 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:06.488936 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:08.988177 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:07.865511 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:09.866721 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:10.988907 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:12.989263 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:12.366014 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:14.865384 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:15.489223 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:17.988232 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:16.865855 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:19.365485 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:19.988565 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:22.488224 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:24.489557 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:21.365826 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:23.865604 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:26.988463 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:28.990525 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:26.365967 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:28.864896 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:30.866151 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:31.487940 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:33.488677 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:33.366169 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:35.368478 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:35.988481 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:37.988691 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:37.866204 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:40.366949 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:39.989047 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:42.497971 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:42.865337 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:44.866077 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:44.988931 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:46.988984 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:49.488471 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:46.866308 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:48.866342 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:51.988480 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:54.487836 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:51.365534 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:53.866767 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:56.488805 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:58.988626 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:56.365798 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:26:58.369574 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:00.865211 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:00.988931 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:02.989274 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:02.865620 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:05.366191 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:05.487582 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:07.487769 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:09.488409 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:07.865654 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:09.866153 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:11.987932 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:13.993302 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:12.366125 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:14.366164 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:16.488026 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:18.988722 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:16.865446 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:18.865584 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:20.865851 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:20.988794 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:23.488388 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:22.866052 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:25.366082 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:25.489230 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:27.988524 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:27.366540 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:29.865189 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:29.989139 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:31.989250 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:34.488910 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:31.866146 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:34.366215 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:36.988638 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:39.488046 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:36.865667 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:38.865699 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:40.866373 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:41.488536 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:43.488804 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:43.366348 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:45.865183 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:45.988591 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:47.989534 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:47.865467 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:50.366506 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:50.488161 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:52.989006 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:52.866416 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:55.366488 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:55.015860 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:57.488310 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:59.488865 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:57.864930 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:27:59.864963 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:01.493382 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:03.988248 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:01.866864 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:04.365807 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:05.989110 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:08.492906 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:06.366042 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:08.368216 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:10.865738 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:10.988521 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:12.989371 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:12.866255 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:15.365881 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:15.488132 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:17.988756 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:17.366823 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:19.865098 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:20.488836 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:22.988169 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:21.865814 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:23.866249 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:25.492744 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:27.987967 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:26.365468 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:28.365944 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:30.366205 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:29.988864 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:31.989186 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:34.488439 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:32.865797 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:34.866904 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:36.488805 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:38.988909 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:37.365283 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:39.365967 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:41.488703 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:43.988369 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:41.865744 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:44.365783 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:45.988453 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:47.989135 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:46.865235 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:48.866110 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:49.989359 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:52.487694 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:54.488326 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:51.365618 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:53.366723 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:55.866483 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:56.489000 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:58.989339 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:28:58.365462 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:00.366117 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:01.488277 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:03.988703 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:02.866160 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:05.365989 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:05.989324 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:07.990979 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:07.865756 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:10.366219 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:10.489138 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:12.989242 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:12.865709 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:15.365199 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:15.488807 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:17.989551 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:17.366206 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:19.367278 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:20.489092 1540905 pod_ready.go:102] pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:20.988019 1540905 pod_ready.go:81] duration metric: took 4m0.005793638s for pod "metrics-server-9975d5f86-h5vcp" in "kube-system" namespace to be "Ready" ...
	E0528 22:29:20.988044 1540905 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 22:29:20.988093 1540905 pod_ready.go:38] duration metric: took 5m17.798780687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:29:20.988114 1540905 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:29:20.988153 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:20.988223 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:21.032451 1540905 cri.go:89] found id: "9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:21.032474 1540905 cri.go:89] found id: ""
	I0528 22:29:21.032489 1540905 logs.go:276] 1 containers: [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3]
	I0528 22:29:21.032561 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.036290 1540905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:21.036360 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:21.078257 1540905 cri.go:89] found id: "c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:21.078279 1540905 cri.go:89] found id: ""
	I0528 22:29:21.078288 1540905 logs.go:276] 1 containers: [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8]
	I0528 22:29:21.078346 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.081979 1540905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:21.082082 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:21.128720 1540905 cri.go:89] found id: "cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:21.128744 1540905 cri.go:89] found id: ""
	I0528 22:29:21.128752 1540905 logs.go:276] 1 containers: [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a]
	I0528 22:29:21.128828 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.132418 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:21.132487 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:21.182402 1540905 cri.go:89] found id: "b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:21.182464 1540905 cri.go:89] found id: ""
	I0528 22:29:21.182485 1540905 logs.go:276] 1 containers: [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105]
	I0528 22:29:21.182568 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.186351 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:21.186428 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:21.236128 1540905 cri.go:89] found id: "7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:21.236152 1540905 cri.go:89] found id: ""
	I0528 22:29:21.236160 1540905 logs.go:276] 1 containers: [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b]
	I0528 22:29:21.236250 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.239897 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:21.239973 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:21.276341 1540905 cri.go:89] found id: "41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:21.276373 1540905 cri.go:89] found id: ""
	I0528 22:29:21.276382 1540905 logs.go:276] 1 containers: [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29]
	I0528 22:29:21.276440 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.280074 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:21.280157 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:21.329000 1540905 cri.go:89] found id: "dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:21.329022 1540905 cri.go:89] found id: ""
	I0528 22:29:21.329030 1540905 logs.go:276] 1 containers: [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5]
	I0528 22:29:21.329099 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.333459 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:21.333533 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:21.376968 1540905 cri.go:89] found id: "bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:21.377039 1540905 cri.go:89] found id: ""
	I0528 22:29:21.377060 1540905 logs.go:276] 1 containers: [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527]
	I0528 22:29:21.377147 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.381185 1540905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:21.381306 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:21.421871 1540905 cri.go:89] found id: "4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:21.421943 1540905 cri.go:89] found id: "05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:21.421962 1540905 cri.go:89] found id: ""
	I0528 22:29:21.421984 1540905 logs.go:276] 2 containers: [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998]
	I0528 22:29:21.422120 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.425791 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:21.429266 1540905 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:21.429294 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:21.449313 1540905 logs.go:123] Gathering logs for kube-apiserver [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3] ...
	I0528 22:29:21.449351 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:21.519486 1540905 logs.go:123] Gathering logs for etcd [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8] ...
	I0528 22:29:21.519521 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:21.569520 1540905 logs.go:123] Gathering logs for coredns [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a] ...
	I0528 22:29:21.569549 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:21.613509 1540905 logs.go:123] Gathering logs for kube-controller-manager [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29] ...
	I0528 22:29:21.613536 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:21.686097 1540905 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:21.686137 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:21.831954 1540905 logs.go:123] Gathering logs for storage-provisioner [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33] ...
	I0528 22:29:21.831985 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:21.881237 1540905 logs.go:123] Gathering logs for kube-scheduler [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105] ...
	I0528 22:29:21.881266 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:21.927283 1540905 logs.go:123] Gathering logs for kindnet [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5] ...
	I0528 22:29:21.927310 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:21.967506 1540905 logs.go:123] Gathering logs for storage-provisioner [05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998] ...
	I0528 22:29:21.967537 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:22.013255 1540905 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:22.013284 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:22.069448 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.186601     738 reflector.go:138] object-"kube-system"/"kindnet-token-ztnvd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ztnvd" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.069707 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187153     738 reflector.go:138] object-"kube-system"/"kube-proxy-token-cs2pc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-cs2pc" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.069920 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187714     738 reflector.go:138] object-"default"/"default-token-k8l4j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-k8l4j" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070157 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.194216     738 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cswbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cswbw" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070359 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199146     738 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070571 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199216     738 reflector.go:138] object-"kube-system"/"coredns-token-q7qdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q7qdg" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.070775 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199262     738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.075454 1540905 logs.go:138] Found kubelet problem: May 28 22:24:04 old-k8s-version-137556 kubelet[738]: E0528 22:24:04.243616     738 reflector.go:138] object-"kube-system"/"metrics-server-token-42d6m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-42d6m" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:22.079777 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.119557     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.079965 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.781597     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.082067 1540905 logs.go:138] Found kubelet problem: May 28 22:24:20 old-k8s-version-137556 kubelet[738]: E0528 22:24:20.203798     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.083824 1540905 logs.go:138] Found kubelet problem: May 28 22:24:33 old-k8s-version-137556 kubelet[738]: E0528 22:24:33.708202     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.084544 1540905 logs.go:138] Found kubelet problem: May 28 22:24:38 old-k8s-version-137556 kubelet[738]: E0528 22:24:38.049446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.084872 1540905 logs.go:138] Found kubelet problem: May 28 22:24:39 old-k8s-version-137556 kubelet[738]: E0528 22:24:39.048542     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.085200 1540905 logs.go:138] Found kubelet problem: May 28 22:24:46 old-k8s-version-137556 kubelet[738]: E0528 22:24:46.690430     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.087272 1540905 logs.go:138] Found kubelet problem: May 28 22:24:48 old-k8s-version-137556 kubelet[738]: E0528 22:24:48.729167     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.087463 1540905 logs.go:138] Found kubelet problem: May 28 22:25:00 old-k8s-version-137556 kubelet[738]: E0528 22:25:00.705605     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.088052 1540905 logs.go:138] Found kubelet problem: May 28 22:25:02 old-k8s-version-137556 kubelet[738]: E0528 22:25:02.084189     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.088378 1540905 logs.go:138] Found kubelet problem: May 28 22:25:06 old-k8s-version-137556 kubelet[738]: E0528 22:25:06.690524     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.088562 1540905 logs.go:138] Found kubelet problem: May 28 22:25:13 old-k8s-version-137556 kubelet[738]: E0528 22:25:13.705095     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.088886 1540905 logs.go:138] Found kubelet problem: May 28 22:25:19 old-k8s-version-137556 kubelet[738]: E0528 22:25:19.704584     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.089072 1540905 logs.go:138] Found kubelet problem: May 28 22:25:24 old-k8s-version-137556 kubelet[738]: E0528 22:25:24.704685     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.089655 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.141913     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.091744 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.718176     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.092073 1540905 logs.go:138] Found kubelet problem: May 28 22:25:36 old-k8s-version-137556 kubelet[738]: E0528 22:25:36.690449     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.092399 1540905 logs.go:138] Found kubelet problem: May 28 22:25:47 old-k8s-version-137556 kubelet[738]: E0528 22:25:47.705446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.092585 1540905 logs.go:138] Found kubelet problem: May 28 22:25:49 old-k8s-version-137556 kubelet[738]: E0528 22:25:49.705980     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.092918 1540905 logs.go:138] Found kubelet problem: May 28 22:26:00 old-k8s-version-137556 kubelet[738]: E0528 22:26:00.704155     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.093104 1540905 logs.go:138] Found kubelet problem: May 28 22:26:01 old-k8s-version-137556 kubelet[738]: E0528 22:26:01.704811     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.093692 1540905 logs.go:138] Found kubelet problem: May 28 22:26:15 old-k8s-version-137556 kubelet[738]: E0528 22:26:15.207095     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094030 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.690320     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094212 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.704608     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.094541 1540905 logs.go:138] Found kubelet problem: May 28 22:26:28 old-k8s-version-137556 kubelet[738]: E0528 22:26:28.704143     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.094726 1540905 logs.go:138] Found kubelet problem: May 28 22:26:29 old-k8s-version-137556 kubelet[738]: E0528 22:26:29.704616     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.095052 1540905 logs.go:138] Found kubelet problem: May 28 22:26:39 old-k8s-version-137556 kubelet[738]: E0528 22:26:39.704876     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.095237 1540905 logs.go:138] Found kubelet problem: May 28 22:26:43 old-k8s-version-137556 kubelet[738]: E0528 22:26:43.705110     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.096084 1540905 logs.go:138] Found kubelet problem: May 28 22:26:53 old-k8s-version-137556 kubelet[738]: E0528 22:26:53.704310     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.096269 1540905 logs.go:138] Found kubelet problem: May 28 22:26:54 old-k8s-version-137556 kubelet[738]: E0528 22:26:54.704655     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.096595 1540905 logs.go:138] Found kubelet problem: May 28 22:27:06 old-k8s-version-137556 kubelet[738]: E0528 22:27:06.704128     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.098660 1540905 logs.go:138] Found kubelet problem: May 28 22:27:09 old-k8s-version-137556 kubelet[738]: E0528 22:27:09.715928     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:22.098988 1540905 logs.go:138] Found kubelet problem: May 28 22:27:17 old-k8s-version-137556 kubelet[738]: E0528 22:27:17.704167     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.099172 1540905 logs.go:138] Found kubelet problem: May 28 22:27:21 old-k8s-version-137556 kubelet[738]: E0528 22:27:21.704885     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.099502 1540905 logs.go:138] Found kubelet problem: May 28 22:27:30 old-k8s-version-137556 kubelet[738]: E0528 22:27:30.704161     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.099687 1540905 logs.go:138] Found kubelet problem: May 28 22:27:33 old-k8s-version-137556 kubelet[738]: E0528 22:27:33.704586     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.100291 1540905 logs.go:138] Found kubelet problem: May 28 22:27:44 old-k8s-version-137556 kubelet[738]: E0528 22:27:44.340886     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.100622 1540905 logs.go:138] Found kubelet problem: May 28 22:27:46 old-k8s-version-137556 kubelet[738]: E0528 22:27:46.690347     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.100807 1540905 logs.go:138] Found kubelet problem: May 28 22:27:48 old-k8s-version-137556 kubelet[738]: E0528 22:27:48.704625     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101135 1540905 logs.go:138] Found kubelet problem: May 28 22:27:58 old-k8s-version-137556 kubelet[738]: E0528 22:27:58.704805     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.101320 1540905 logs.go:138] Found kubelet problem: May 28 22:28:00 old-k8s-version-137556 kubelet[738]: E0528 22:28:00.704891     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101634 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705029     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.101831 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705742     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.102015 1540905 logs.go:138] Found kubelet problem: May 28 22:28:24 old-k8s-version-137556 kubelet[738]: E0528 22:28:24.704663     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.102358 1540905 logs.go:138] Found kubelet problem: May 28 22:28:26 old-k8s-version-137556 kubelet[738]: E0528 22:28:26.704162     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.102544 1540905 logs.go:138] Found kubelet problem: May 28 22:28:39 old-k8s-version-137556 kubelet[738]: E0528 22:28:39.704787     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.102873 1540905 logs.go:138] Found kubelet problem: May 28 22:28:40 old-k8s-version-137556 kubelet[738]: E0528 22:28:40.704255     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.103452 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.704205     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.103640 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.103825 1540905 logs.go:138] Found kubelet problem: May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.104151 1540905 logs.go:138] Found kubelet problem: May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.104340 1540905 logs.go:138] Found kubelet problem: May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.104665 1540905 logs.go:138] Found kubelet problem: May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
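The warnings above all trace back to two pods in the old-k8s-version-137556 profile: metrics-server-9975d5f86-h5vcp, whose image reference appears to be intentionally pointed at the unreachable fake.domain registry by this test and therefore stays in ImagePullBackOff, and dashboard-metrics-scraper-8d5bb5db8-p2r9s, whose restart back-off keeps lengthening. A minimal way to confirm the failing image reference from outside the harness is sketched below; the pod, namespace, and deployment names are taken from the log, while the kubectl context name matching the minikube profile is an assumption.

	# sketch (assumption: kubectl context is named after the minikube profile)
	kubectl --context old-k8s-version-137556 -n kube-system describe pod metrics-server-9975d5f86-h5vcp | grep -E 'Image:|Back-off|ErrImagePull'
	kubectl --context old-k8s-version-137556 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4, matching the kubelet messages above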
	I0528 22:29:22.104674 1540905 logs.go:123] Gathering logs for kube-proxy [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b] ...
	I0528 22:29:22.104688 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:22.141117 1540905 logs.go:123] Gathering logs for kubernetes-dashboard [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527] ...
	I0528 22:29:22.141148 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:22.180279 1540905 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:22.180303 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:22.275290 1540905 logs.go:123] Gathering logs for container status ...
	I0528 22:29:22.275341 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:22.320717 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:22.320742 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:22.320829 1540905 out.go:239] X Problems detected in kubelet:
	W0528 22:29:22.320847 1540905 out.go:239]   May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.320988 1540905 out.go:239]   May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.321016 1540905 out.go:239]   May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:22.321024 1540905 out.go:239]   May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:22.321042 1540905 out.go:239]   May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	I0528 22:29:22.321050 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:22.321057 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
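At this point the first minikube process (PID 1540905, profile old-k8s-version-137556) has finished one log-gathering pass and re-printed the most recent kubelet problems; the lines that follow come from a second process (PID 1546162, profile no-preload-264173) whose wait on metrics-server-569cc877fc-62rwk times out. For the CrashLoopBackOff side of the picture, a hedged way to pull the scraper's last crash output is shown below; the pod and namespace names are taken from the log, and the context name is again an assumption.

	# sketch (assumptions: context name matches the profile; pod name as logged above)
	kubectl --context old-k8s-version-137556 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-p2r9s --previous
	kubectl --context old-k8s-version-137556 -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-p2r9s -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'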
	I0528 22:29:21.865988 1546162 pod_ready.go:102] pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace has status "Ready":"False"
	I0528 22:29:22.364912 1546162 pod_ready.go:81] duration metric: took 4m0.005364553s for pod "metrics-server-569cc877fc-62rwk" in "kube-system" namespace to be "Ready" ...
	E0528 22:29:22.364941 1546162 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 22:29:22.364952 1546162 pod_ready.go:38] duration metric: took 4m8.692307712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:29:22.364968 1546162 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:29:22.364995 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:22.365056 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:22.411170 1546162 cri.go:89] found id: "dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2"
	I0528 22:29:22.411193 1546162 cri.go:89] found id: ""
	I0528 22:29:22.411200 1546162 logs.go:276] 1 containers: [dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2]
	I0528 22:29:22.411259 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.414631 1546162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:22.414705 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:22.456030 1546162 cri.go:89] found id: "aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8"
	I0528 22:29:22.456050 1546162 cri.go:89] found id: ""
	I0528 22:29:22.456058 1546162 logs.go:276] 1 containers: [aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8]
	I0528 22:29:22.456116 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.459827 1546162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:22.459948 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:22.499320 1546162 cri.go:89] found id: "61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78"
	I0528 22:29:22.499353 1546162 cri.go:89] found id: ""
	I0528 22:29:22.499362 1546162 logs.go:276] 1 containers: [61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78]
	I0528 22:29:22.499457 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.503219 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:22.503338 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:22.550148 1546162 cri.go:89] found id: "737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b"
	I0528 22:29:22.550210 1546162 cri.go:89] found id: ""
	I0528 22:29:22.550231 1546162 logs.go:276] 1 containers: [737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b]
	I0528 22:29:22.550313 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.554142 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:22.554215 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:22.591936 1546162 cri.go:89] found id: "e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66"
	I0528 22:29:22.591957 1546162 cri.go:89] found id: ""
	I0528 22:29:22.591965 1546162 logs.go:276] 1 containers: [e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66]
	I0528 22:29:22.592020 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.595658 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:22.595747 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:22.635760 1546162 cri.go:89] found id: "92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734"
	I0528 22:29:22.635780 1546162 cri.go:89] found id: ""
	I0528 22:29:22.635787 1546162 logs.go:276] 1 containers: [92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734]
	I0528 22:29:22.635842 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.639503 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:22.639604 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:22.674552 1546162 cri.go:89] found id: "c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a"
	I0528 22:29:22.674623 1546162 cri.go:89] found id: ""
	I0528 22:29:22.674643 1546162 logs.go:276] 1 containers: [c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a]
	I0528 22:29:22.674722 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.678189 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:22.678254 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:22.713766 1546162 cri.go:89] found id: "639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a"
	I0528 22:29:22.713785 1546162 cri.go:89] found id: ""
	I0528 22:29:22.713793 1546162 logs.go:276] 1 containers: [639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a]
	I0528 22:29:22.713870 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.717256 1546162 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:22.717327 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:22.757710 1546162 cri.go:89] found id: "db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9"
	I0528 22:29:22.757734 1546162 cri.go:89] found id: "07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129"
	I0528 22:29:22.757739 1546162 cri.go:89] found id: ""
	I0528 22:29:22.757746 1546162 logs.go:276] 2 containers: [db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9 07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129]
	I0528 22:29:22.757809 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.761367 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:22.764656 1546162 logs.go:123] Gathering logs for kubernetes-dashboard [639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a] ...
	I0528 22:29:22.764727 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a"
	I0528 22:29:22.813399 1546162 logs.go:123] Gathering logs for storage-provisioner [db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9] ...
	I0528 22:29:22.813426 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9"
	I0528 22:29:22.864851 1546162 logs.go:123] Gathering logs for storage-provisioner [07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129] ...
	I0528 22:29:22.864879 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129"
	I0528 22:29:22.905620 1546162 logs.go:123] Gathering logs for etcd [aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8] ...
	I0528 22:29:22.905654 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8"
	I0528 22:29:22.954101 1546162 logs.go:123] Gathering logs for coredns [61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78] ...
	I0528 22:29:22.954132 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78"
	I0528 22:29:22.994855 1546162 logs.go:123] Gathering logs for kindnet [c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a] ...
	I0528 22:29:22.994899 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a"
	I0528 22:29:23.048998 1546162 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:23.049028 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:23.129334 1546162 logs.go:123] Gathering logs for container status ...
	I0528 22:29:23.129369 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:23.205581 1546162 logs.go:123] Gathering logs for kube-apiserver [dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2] ...
	I0528 22:29:23.205611 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2"
	I0528 22:29:23.258182 1546162 logs.go:123] Gathering logs for kube-proxy [e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66] ...
	I0528 22:29:23.258218 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66"
	I0528 22:29:23.301675 1546162 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:23.301704 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:23.447810 1546162 logs.go:123] Gathering logs for kube-scheduler [737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b] ...
	I0528 22:29:23.447847 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b"
	I0528 22:29:23.496843 1546162 logs.go:123] Gathering logs for kube-controller-manager [92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734] ...
	I0528 22:29:23.496871 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734"
	I0528 22:29:23.572239 1546162 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:23.572273 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:23.618623 1546162 logs.go:138] Found kubelet problem: May 28 22:25:28 no-preload-264173 kubelet[747]: W0528 22:25:28.245329     747 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	W0528 22:29:23.618856 1546162 logs.go:138] Found kubelet problem: May 28 22:25:28 no-preload-264173 kubelet[747]: E0528 22:25:28.245366     747 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	I0528 22:29:23.641158 1546162 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:23.641188 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:23.660927 1546162 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:23.660955 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:23.661013 1546162 out.go:239] X Problems detected in kubelet:
	W0528 22:29:23.661022 1546162 out.go:239]   May 28 22:25:28 no-preload-264173 kubelet[747]: W0528 22:25:28.245329     747 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	W0528 22:29:23.661029 1546162 out.go:239]   May 28 22:25:28 no-preload-264173 kubelet[747]: E0528 22:25:28.245366     747 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	I0528 22:29:23.661042 1546162 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:23.661048 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
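The "Gathering logs for ..." entries above and below are the harness running the same handful of commands over SSH for each component. The sketch below reproduces that sequence by hand; the crictl and journalctl invocations are taken directly from the ssh_runner lines in this log, and only the use of minikube ssh as the transport (plus the placeholder container id) is an assumption.

	# sketch: manual version of the harness's log-gathering pass against the profile
	minikube ssh -p old-k8s-version-137556 "sudo crictl ps -a --quiet --name=kube-apiserver"
	minikube ssh -p old-k8s-version-137556 "sudo /usr/bin/crictl logs --tail 400 <container-id>"
	minikube ssh -p old-k8s-version-137556 "sudo journalctl -u kubelet -n 400"
	minikube ssh -p old-k8s-version-137556 "sudo journalctl -u crio -n 400"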
	I0528 22:29:32.323043 1540905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:29:32.335698 1540905 api_server.go:72] duration metric: took 5m48.896786162s to wait for apiserver process to appear ...
	I0528 22:29:32.335723 1540905 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:29:32.335762 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:32.335824 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:32.374464 1540905 cri.go:89] found id: "9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:32.374485 1540905 cri.go:89] found id: ""
	I0528 22:29:32.374494 1540905 logs.go:276] 1 containers: [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3]
	I0528 22:29:32.374556 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.378665 1540905 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:32.378735 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:32.428280 1540905 cri.go:89] found id: "c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:32.428310 1540905 cri.go:89] found id: ""
	I0528 22:29:32.428319 1540905 logs.go:276] 1 containers: [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8]
	I0528 22:29:32.428377 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.432088 1540905 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:32.432153 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:32.474292 1540905 cri.go:89] found id: "cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:32.474312 1540905 cri.go:89] found id: ""
	I0528 22:29:32.474319 1540905 logs.go:276] 1 containers: [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a]
	I0528 22:29:32.474376 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.477891 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:32.477960 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:32.521895 1540905 cri.go:89] found id: "b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:32.521917 1540905 cri.go:89] found id: ""
	I0528 22:29:32.521925 1540905 logs.go:276] 1 containers: [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105]
	I0528 22:29:32.521982 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.526385 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:32.526465 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:32.568221 1540905 cri.go:89] found id: "7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:32.568241 1540905 cri.go:89] found id: ""
	I0528 22:29:32.568249 1540905 logs.go:276] 1 containers: [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b]
	I0528 22:29:32.568304 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.571975 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:32.572079 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:32.609486 1540905 cri.go:89] found id: "41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:32.609510 1540905 cri.go:89] found id: ""
	I0528 22:29:32.609517 1540905 logs.go:276] 1 containers: [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29]
	I0528 22:29:32.609574 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.612947 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:32.613016 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:32.652655 1540905 cri.go:89] found id: "dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:32.652680 1540905 cri.go:89] found id: ""
	I0528 22:29:32.652689 1540905 logs.go:276] 1 containers: [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5]
	I0528 22:29:32.652748 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.656315 1540905 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:32.656442 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:32.693909 1540905 cri.go:89] found id: "bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:32.693941 1540905 cri.go:89] found id: ""
	I0528 22:29:32.693949 1540905 logs.go:276] 1 containers: [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527]
	I0528 22:29:32.694072 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.697734 1540905 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:32.697815 1540905 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:32.736027 1540905 cri.go:89] found id: "4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:32.736053 1540905 cri.go:89] found id: "05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:32.736058 1540905 cri.go:89] found id: ""
	I0528 22:29:32.736065 1540905 logs.go:276] 2 containers: [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998]
	I0528 22:29:32.736150 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.742328 1540905 ssh_runner.go:195] Run: which crictl
	I0528 22:29:32.746335 1540905 logs.go:123] Gathering logs for storage-provisioner [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33] ...
	I0528 22:29:32.746363 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33"
	I0528 22:29:32.783928 1540905 logs.go:123] Gathering logs for storage-provisioner [05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998] ...
	I0528 22:29:32.783955 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998"
	I0528 22:29:32.821570 1540905 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:32.821598 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:32.907550 1540905 logs.go:123] Gathering logs for kubernetes-dashboard [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527] ...
	I0528 22:29:32.907629 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527"
	I0528 22:29:32.952689 1540905 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:32.952720 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:32.971568 1540905 logs.go:123] Gathering logs for etcd [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8] ...
	I0528 22:29:32.971598 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8"
	I0528 22:29:33.039002 1540905 logs.go:123] Gathering logs for kube-scheduler [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105] ...
	I0528 22:29:33.039034 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105"
	I0528 22:29:33.090654 1540905 logs.go:123] Gathering logs for kube-proxy [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b] ...
	I0528 22:29:33.090687 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b"
	I0528 22:29:33.136556 1540905 logs.go:123] Gathering logs for kindnet [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5] ...
	I0528 22:29:33.136585 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5"
	I0528 22:29:33.192155 1540905 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:33.192228 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:33.246432 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.186601     738 reflector.go:138] object-"kube-system"/"kindnet-token-ztnvd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ztnvd" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.246672 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187153     738 reflector.go:138] object-"kube-system"/"kube-proxy-token-cs2pc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-cs2pc" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.246887 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.187714     738 reflector.go:138] object-"default"/"default-token-k8l4j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-k8l4j" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247119 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.194216     738 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cswbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cswbw" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247320 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199146     738 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247555 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199216     738 reflector.go:138] object-"kube-system"/"coredns-token-q7qdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q7qdg" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.247760 1540905 logs.go:138] Found kubelet problem: May 28 22:24:03 old-k8s-version-137556 kubelet[738]: E0528 22:24:03.199262     738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.252483 1540905 logs.go:138] Found kubelet problem: May 28 22:24:04 old-k8s-version-137556 kubelet[738]: E0528 22:24:04.243616     738 reflector.go:138] object-"kube-system"/"metrics-server-token-42d6m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-42d6m" is forbidden: User "system:node:old-k8s-version-137556" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-137556' and this object
	W0528 22:29:33.256917 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.119557     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.257113 1540905 logs.go:138] Found kubelet problem: May 28 22:24:06 old-k8s-version-137556 kubelet[738]: E0528 22:24:06.781597     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.259198 1540905 logs.go:138] Found kubelet problem: May 28 22:24:20 old-k8s-version-137556 kubelet[738]: E0528 22:24:20.203798     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.260974 1540905 logs.go:138] Found kubelet problem: May 28 22:24:33 old-k8s-version-137556 kubelet[738]: E0528 22:24:33.708202     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.261692 1540905 logs.go:138] Found kubelet problem: May 28 22:24:38 old-k8s-version-137556 kubelet[738]: E0528 22:24:38.049446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.262026 1540905 logs.go:138] Found kubelet problem: May 28 22:24:39 old-k8s-version-137556 kubelet[738]: E0528 22:24:39.048542     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.262358 1540905 logs.go:138] Found kubelet problem: May 28 22:24:46 old-k8s-version-137556 kubelet[738]: E0528 22:24:46.690430     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.265262 1540905 logs.go:138] Found kubelet problem: May 28 22:24:48 old-k8s-version-137556 kubelet[738]: E0528 22:24:48.729167     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.265472 1540905 logs.go:138] Found kubelet problem: May 28 22:25:00 old-k8s-version-137556 kubelet[738]: E0528 22:25:00.705605     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.266085 1540905 logs.go:138] Found kubelet problem: May 28 22:25:02 old-k8s-version-137556 kubelet[738]: E0528 22:25:02.084189     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.266416 1540905 logs.go:138] Found kubelet problem: May 28 22:25:06 old-k8s-version-137556 kubelet[738]: E0528 22:25:06.690524     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.266603 1540905 logs.go:138] Found kubelet problem: May 28 22:25:13 old-k8s-version-137556 kubelet[738]: E0528 22:25:13.705095     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.266933 1540905 logs.go:138] Found kubelet problem: May 28 22:25:19 old-k8s-version-137556 kubelet[738]: E0528 22:25:19.704584     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.267118 1540905 logs.go:138] Found kubelet problem: May 28 22:25:24 old-k8s-version-137556 kubelet[738]: E0528 22:25:24.704685     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.267735 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.141913     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.269944 1540905 logs.go:138] Found kubelet problem: May 28 22:25:35 old-k8s-version-137556 kubelet[738]: E0528 22:25:35.718176     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.270294 1540905 logs.go:138] Found kubelet problem: May 28 22:25:36 old-k8s-version-137556 kubelet[738]: E0528 22:25:36.690449     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.270624 1540905 logs.go:138] Found kubelet problem: May 28 22:25:47 old-k8s-version-137556 kubelet[738]: E0528 22:25:47.705446     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.271334 1540905 logs.go:138] Found kubelet problem: May 28 22:25:49 old-k8s-version-137556 kubelet[738]: E0528 22:25:49.705980     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.271686 1540905 logs.go:138] Found kubelet problem: May 28 22:26:00 old-k8s-version-137556 kubelet[738]: E0528 22:26:00.704155     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.271875 1540905 logs.go:138] Found kubelet problem: May 28 22:26:01 old-k8s-version-137556 kubelet[738]: E0528 22:26:01.704811     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.272461 1540905 logs.go:138] Found kubelet problem: May 28 22:26:15 old-k8s-version-137556 kubelet[738]: E0528 22:26:15.207095     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.272794 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.690320     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.272980 1540905 logs.go:138] Found kubelet problem: May 28 22:26:16 old-k8s-version-137556 kubelet[738]: E0528 22:26:16.704608     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.273349 1540905 logs.go:138] Found kubelet problem: May 28 22:26:28 old-k8s-version-137556 kubelet[738]: E0528 22:26:28.704143     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.273541 1540905 logs.go:138] Found kubelet problem: May 28 22:26:29 old-k8s-version-137556 kubelet[738]: E0528 22:26:29.704616     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.273874 1540905 logs.go:138] Found kubelet problem: May 28 22:26:39 old-k8s-version-137556 kubelet[738]: E0528 22:26:39.704876     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.274074 1540905 logs.go:138] Found kubelet problem: May 28 22:26:43 old-k8s-version-137556 kubelet[738]: E0528 22:26:43.705110     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.274980 1540905 logs.go:138] Found kubelet problem: May 28 22:26:53 old-k8s-version-137556 kubelet[738]: E0528 22:26:53.704310     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.275170 1540905 logs.go:138] Found kubelet problem: May 28 22:26:54 old-k8s-version-137556 kubelet[738]: E0528 22:26:54.704655     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.275499 1540905 logs.go:138] Found kubelet problem: May 28 22:27:06 old-k8s-version-137556 kubelet[738]: E0528 22:27:06.704128     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.277543 1540905 logs.go:138] Found kubelet problem: May 28 22:27:09 old-k8s-version-137556 kubelet[738]: E0528 22:27:09.715928     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0528 22:29:33.277870 1540905 logs.go:138] Found kubelet problem: May 28 22:27:17 old-k8s-version-137556 kubelet[738]: E0528 22:27:17.704167     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.278074 1540905 logs.go:138] Found kubelet problem: May 28 22:27:21 old-k8s-version-137556 kubelet[738]: E0528 22:27:21.704885     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.278407 1540905 logs.go:138] Found kubelet problem: May 28 22:27:30 old-k8s-version-137556 kubelet[738]: E0528 22:27:30.704161     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.278671 1540905 logs.go:138] Found kubelet problem: May 28 22:27:33 old-k8s-version-137556 kubelet[738]: E0528 22:27:33.704586     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.279301 1540905 logs.go:138] Found kubelet problem: May 28 22:27:44 old-k8s-version-137556 kubelet[738]: E0528 22:27:44.340886     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.279669 1540905 logs.go:138] Found kubelet problem: May 28 22:27:46 old-k8s-version-137556 kubelet[738]: E0528 22:27:46.690347     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.279882 1540905 logs.go:138] Found kubelet problem: May 28 22:27:48 old-k8s-version-137556 kubelet[738]: E0528 22:27:48.704625     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.280239 1540905 logs.go:138] Found kubelet problem: May 28 22:27:58 old-k8s-version-137556 kubelet[738]: E0528 22:27:58.704805     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.280461 1540905 logs.go:138] Found kubelet problem: May 28 22:28:00 old-k8s-version-137556 kubelet[738]: E0528 22:28:00.704891     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.280810 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705029     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.281039 1540905 logs.go:138] Found kubelet problem: May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705742     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.281253 1540905 logs.go:138] Found kubelet problem: May 28 22:28:24 old-k8s-version-137556 kubelet[738]: E0528 22:28:24.704663     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.281601 1540905 logs.go:138] Found kubelet problem: May 28 22:28:26 old-k8s-version-137556 kubelet[738]: E0528 22:28:26.704162     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.281815 1540905 logs.go:138] Found kubelet problem: May 28 22:28:39 old-k8s-version-137556 kubelet[738]: E0528 22:28:39.704787     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.282338 1540905 logs.go:138] Found kubelet problem: May 28 22:28:40 old-k8s-version-137556 kubelet[738]: E0528 22:28:40.704255     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.282977 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.704205     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.283198 1540905 logs.go:138] Found kubelet problem: May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.283416 1540905 logs.go:138] Found kubelet problem: May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.284178 1540905 logs.go:138] Found kubelet problem: May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.284427 1540905 logs.go:138] Found kubelet problem: May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.284797 1540905 logs.go:138] Found kubelet problem: May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.285031 1540905 logs.go:138] Found kubelet problem: May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:29:33.285058 1540905 logs.go:123] Gathering logs for kube-apiserver [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3] ...
	I0528 22:29:33.285090 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3"
	I0528 22:29:33.358393 1540905 logs.go:123] Gathering logs for container status ...
	I0528 22:29:33.358434 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:33.410155 1540905 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:33.410232 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:33.558604 1540905 logs.go:123] Gathering logs for coredns [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a] ...
	I0528 22:29:33.558637 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a"
	I0528 22:29:33.598686 1540905 logs.go:123] Gathering logs for kube-controller-manager [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29] ...
	I0528 22:29:33.598716 1540905 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29"
	I0528 22:29:33.670199 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:33.670230 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:33.670308 1540905 out.go:239] X Problems detected in kubelet:
	W0528 22:29:33.670324 1540905 out.go:239]   May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.670455 1540905 out.go:239]   May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.670464 1540905 out.go:239]   May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:29:33.670488 1540905 out.go:239]   May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	W0528 22:29:33.670497 1540905 out.go:239]   May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:29:33.670507 1540905 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:33.670514 1540905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:29:33.662036 1546162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:29:33.675487 1546162 api_server.go:72] duration metric: took 4m25.018661609s to wait for apiserver process to appear ...
	I0528 22:29:33.675517 1546162 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:29:33.675562 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 22:29:33.675622 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 22:29:33.727122 1546162 cri.go:89] found id: "dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2"
	I0528 22:29:33.727144 1546162 cri.go:89] found id: ""
	I0528 22:29:33.727152 1546162 logs.go:276] 1 containers: [dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2]
	I0528 22:29:33.727206 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.730731 1546162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 22:29:33.730813 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 22:29:33.771261 1546162 cri.go:89] found id: "aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8"
	I0528 22:29:33.771285 1546162 cri.go:89] found id: ""
	I0528 22:29:33.771292 1546162 logs.go:276] 1 containers: [aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8]
	I0528 22:29:33.771345 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.775371 1546162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 22:29:33.775448 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 22:29:33.811510 1546162 cri.go:89] found id: "61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78"
	I0528 22:29:33.811529 1546162 cri.go:89] found id: ""
	I0528 22:29:33.811537 1546162 logs.go:276] 1 containers: [61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78]
	I0528 22:29:33.811591 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.815319 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 22:29:33.815395 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 22:29:33.852867 1546162 cri.go:89] found id: "737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b"
	I0528 22:29:33.852889 1546162 cri.go:89] found id: ""
	I0528 22:29:33.852897 1546162 logs.go:276] 1 containers: [737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b]
	I0528 22:29:33.852979 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.856417 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 22:29:33.856487 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 22:29:33.892204 1546162 cri.go:89] found id: "e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66"
	I0528 22:29:33.892226 1546162 cri.go:89] found id: ""
	I0528 22:29:33.892235 1546162 logs.go:276] 1 containers: [e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66]
	I0528 22:29:33.892312 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.896455 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 22:29:33.896534 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 22:29:33.937747 1546162 cri.go:89] found id: "92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734"
	I0528 22:29:33.937770 1546162 cri.go:89] found id: ""
	I0528 22:29:33.937777 1546162 logs.go:276] 1 containers: [92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734]
	I0528 22:29:33.937866 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.941524 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 22:29:33.941648 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 22:29:33.979806 1546162 cri.go:89] found id: "c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a"
	I0528 22:29:33.979826 1546162 cri.go:89] found id: ""
	I0528 22:29:33.979834 1546162 logs.go:276] 1 containers: [c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a]
	I0528 22:29:33.979906 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:33.983431 1546162 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 22:29:33.983502 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 22:29:34.029580 1546162 cri.go:89] found id: "db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9"
	I0528 22:29:34.029603 1546162 cri.go:89] found id: "07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129"
	I0528 22:29:34.029609 1546162 cri.go:89] found id: ""
	I0528 22:29:34.029615 1546162 logs.go:276] 2 containers: [db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9 07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129]
	I0528 22:29:34.029691 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:34.033489 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:34.037059 1546162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 22:29:34.037179 1546162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 22:29:34.080855 1546162 cri.go:89] found id: "639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a"
	I0528 22:29:34.080876 1546162 cri.go:89] found id: ""
	I0528 22:29:34.080892 1546162 logs.go:276] 1 containers: [639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a]
	I0528 22:29:34.080948 1546162 ssh_runner.go:195] Run: which crictl
	I0528 22:29:34.085211 1546162 logs.go:123] Gathering logs for kindnet [c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a] ...
	I0528 22:29:34.085238 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c436922d5e590ed71af77a5e0fb81ec0ce7e3930afceda784aae103ba74b428a"
	I0528 22:29:34.127620 1546162 logs.go:123] Gathering logs for kubernetes-dashboard [639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a] ...
	I0528 22:29:34.127656 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639c3df42842cbc3cacb6b0b6a54672c1485dbd20d7ef4bd83fbfe76d112e30a"
	I0528 22:29:34.171103 1546162 logs.go:123] Gathering logs for CRI-O ...
	I0528 22:29:34.171134 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 22:29:34.247445 1546162 logs.go:123] Gathering logs for etcd [aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8] ...
	I0528 22:29:34.247481 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa0c8e266cfe479bc662b630eb59e99e499e651a685ee88e02399bdbcc467fd8"
	I0528 22:29:34.298824 1546162 logs.go:123] Gathering logs for coredns [61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78] ...
	I0528 22:29:34.298856 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c8c399180a585b6c4aaf6bb7e4f6bcb2bbcc6d0e8136d26cf4e034870ded78"
	I0528 22:29:34.349659 1546162 logs.go:123] Gathering logs for storage-provisioner [db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9] ...
	I0528 22:29:34.349688 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db41507f25a5d10cf2c344b0be6d2933ce5190a98b22888de50a60e7699195a9"
	I0528 22:29:34.399282 1546162 logs.go:123] Gathering logs for storage-provisioner [07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129] ...
	I0528 22:29:34.399307 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a58c3fcc756c7e5878f9e675b373c250348ebf565e9c0acb5c499b9aac9129"
	I0528 22:29:34.439298 1546162 logs.go:123] Gathering logs for container status ...
	I0528 22:29:34.439328 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:29:34.488734 1546162 logs.go:123] Gathering logs for kubelet ...
	I0528 22:29:34.488764 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:29:34.537884 1546162 logs.go:138] Found kubelet problem: May 28 22:25:28 no-preload-264173 kubelet[747]: W0528 22:25:28.245329     747 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	W0528 22:29:34.538138 1546162 logs.go:138] Found kubelet problem: May 28 22:25:28 no-preload-264173 kubelet[747]: E0528 22:25:28.245366     747 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	I0528 22:29:34.561367 1546162 logs.go:123] Gathering logs for dmesg ...
	I0528 22:29:34.561399 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:29:34.581705 1546162 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:29:34.581735 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:29:34.709346 1546162 logs.go:123] Gathering logs for kube-apiserver [dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2] ...
	I0528 22:29:34.709377 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfbd292c53c8b0af8de131291c95af8a9bc3f49d144fa7ee77b0e513547eeea2"
	I0528 22:29:34.779780 1546162 logs.go:123] Gathering logs for kube-scheduler [737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b] ...
	I0528 22:29:34.779859 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 737b93290762f5f2b3818501d896453a1d5caa87b47dfd5d396910388c6eb87b"
	I0528 22:29:34.832321 1546162 logs.go:123] Gathering logs for kube-proxy [e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66] ...
	I0528 22:29:34.832351 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2194e3ac3ff1b30b68520e9f14a14e44e4dd92c56de3c2795db55e43ab28b66"
	I0528 22:29:34.869988 1546162 logs.go:123] Gathering logs for kube-controller-manager [92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734] ...
	I0528 22:29:34.870040 1546162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92648f040c3706797042c24570746c0268b8f17ee46beafa539de155e0258734"
	I0528 22:29:34.928666 1546162 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:34.928700 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:29:34.928771 1546162 out.go:239] X Problems detected in kubelet:
	W0528 22:29:34.928787 1546162 out.go:239]   May 28 22:25:28 no-preload-264173 kubelet[747]: W0528 22:25:28.245329     747 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	W0528 22:29:34.928795 1546162 out.go:239]   May 28 22:25:28 no-preload-264173 kubelet[747]: E0528 22:25:28.245366     747 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-264173" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-264173' and this object
	I0528 22:29:34.928916 1546162 out.go:304] Setting ErrFile to fd 2...
	I0528 22:29:34.928923 1546162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:29:43.671573 1540905 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0528 22:29:43.691524 1540905 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0528 22:29:43.693704 1540905 out.go:177] 
	W0528 22:29:43.696067 1540905 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0528 22:29:43.696109 1540905 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0528 22:29:43.696128 1540905 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0528 22:29:43.696133 1540905 out.go:239] * 
	W0528 22:29:43.697172 1540905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 22:29:43.699282 1540905 out.go:177] 
	
	
	==> CRI-O <==
	May 28 22:27:43 old-k8s-version-137556 conmon[2351]: conmon 9c6d3e2adbe78524504e <ninfo>: container 2362 exited with status 1
	May 28 22:27:44 old-k8s-version-137556 crio[625]: time="2024-05-28 22:27:44.341951600Z" level=info msg="Removing container: ebdef6f57e7753d65d025f754096305c549444aee4586118d0891d48d3df3fef" id=8509da57-d478-4835-868e-c68ac1ded682 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	May 28 22:27:44 old-k8s-version-137556 crio[625]: time="2024-05-28 22:27:44.369436594Z" level=info msg="Removed container ebdef6f57e7753d65d025f754096305c549444aee4586118d0891d48d3df3fef: kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-p2r9s/dashboard-metrics-scraper" id=8509da57-d478-4835-868e-c68ac1ded682 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	May 28 22:27:48 old-k8s-version-137556 crio[625]: time="2024-05-28 22:27:48.704171596Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=37da0746-bc4c-40c6-b2c2-a96723cea7d3 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:27:48 old-k8s-version-137556 crio[625]: time="2024-05-28 22:27:48.704400316Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=37da0746-bc4c-40c6-b2c2-a96723cea7d3 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:00 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:00.704222025Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=775aa0f8-713d-46db-9a8b-2731a6cb2acc name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:00 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:00.704508738Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=775aa0f8-713d-46db-9a8b-2731a6cb2acc name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:12 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:12.704578924Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fd07f485-0a85-4632-ad47-d133671002ee name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:12 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:12.704810139Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fd07f485-0a85-4632-ad47-d133671002ee name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:24 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:24.704202073Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4ea1a5aa-9006-4b79-bdd6-8561eae39a19 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:24 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:24.704437538Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4ea1a5aa-9006-4b79-bdd6-8561eae39a19 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:39 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:39.704175705Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3fa0de35-2f17-4525-97ae-4773efde85aa name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:39 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:39.704413746Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3fa0de35-2f17-4525-97ae-4773efde85aa name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:51 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:51.441679326Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2acf67e4-bdca-4e71-88a0-2b4a33185d70 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:51 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:51.441924489Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2acf67e4-bdca-4e71-88a0-2b4a33185d70 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:52 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:52.704308698Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=22d55952-e529-4206-8d10-33c45dbe03d7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:28:52 old-k8s-version-137556 crio[625]: time="2024-05-28 22:28:52.704527745Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=22d55952-e529-4206-8d10-33c45dbe03d7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:04 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:04.704359437Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=87956884-873e-4a58-9790-a8a2db8c6b8f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:04 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:04.704609588Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=87956884-873e-4a58-9790-a8a2db8c6b8f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:18 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:18.704219952Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=39abebf7-5550-444d-b3e1-8b790d0ee0a9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:18 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:18.704461579Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=39abebf7-5550-444d-b3e1-8b790d0ee0a9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:29 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:29.704233490Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=be79a416-203f-4ec5-bd0e-f71c448ce9e9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:29 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:29.704468060Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=be79a416-203f-4ec5-bd0e-f71c448ce9e9 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:42 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:42.704241825Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f9abf525-53f6-4e56-beb1-124861fb66d3 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 28 22:29:42 old-k8s-version-137556 crio[625]: time="2024-05-28 22:29:42.704499164Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f9abf525-53f6-4e56-beb1-124861fb66d3 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9c6d3e2adbe78       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   bf431ccc20423       dashboard-metrics-scraper-8d5bb5db8-p2r9s
	4c549161f1119       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         1                   e76f6651fd7f8       storage-provisioner
	bd823ba62bc36       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   7240af405d932       kubernetes-dashboard-cd95d586-8fm5h
	d0d1f2d0a469d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   942d1ccda4bb1       busybox
	cfa4cf6ca058c       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   6310403df32fa       coredns-74ff55c5b-rkctv
	05623e44fa29d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Exited              storage-provisioner         0                   e76f6651fd7f8       storage-provisioner
	7b6e2d0457829       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   14802998d0f83       kube-proxy-8jz6w
	dd01bfbb0358d       89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40                                           5 minutes ago       Running             kindnet-cni                 0                   51500ce31d489       kindnet-p94zm
	9ac42df22be82       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           5 minutes ago       Running             kube-apiserver              0                   5e814ea5b33b5       kube-apiserver-old-k8s-version-137556
	c6104b8b4482d       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           5 minutes ago       Running             etcd                        0                   7c44d1765ed5b       etcd-old-k8s-version-137556
	41bbe9eb76e24       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           5 minutes ago       Running             kube-controller-manager     0                   81d9e66b117bd       kube-controller-manager-old-k8s-version-137556
	b431299048367       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           5 minutes ago       Running             kube-scheduler              0                   f22d367b8975b       kube-scheduler-old-k8s-version-137556
	
	
	==> coredns [cfa4cf6ca058c1565fcdfd433992ad14f91256b5d1fce4f4d2a0e7f68b1a683a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43030 - 41591 "HINFO IN 2352742964943992761.5656879409533965859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013668915s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42507 - 57517 "HINFO IN 229562044839275671.5753682441334951666. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02204632s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0528 22:24:36.006932       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:24:05.999342715 +0000 UTC m=+0.099054401) (total time: 30.007210072s):
	Trace[1427131847]: [30.007210072s] [30.007210072s] END
	E0528 22:24:36.006976       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0528 22:24:36.007119       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:24:05.998812562 +0000 UTC m=+0.098524256) (total time: 30.007616092s):
	Trace[2019727887]: [30.007616092s] [30.007616092s] END
	E0528 22:24:36.007129       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0528 22:24:36.007210       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:24:05.999603582 +0000 UTC m=+0.099315268) (total time: 30.007115551s):
	Trace[939984059]: [30.007115551s] [30.007115551s] END
	E0528 22:24:36.007214       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-137556
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-137556
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=old-k8s-version-137556
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T22_21_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:21:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-137556
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:29:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:24:54 +0000   Tue, 28 May 2024 22:21:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:24:54 +0000   Tue, 28 May 2024 22:21:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:24:54 +0000   Tue, 28 May 2024 22:21:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:24:54 +0000   Tue, 28 May 2024 22:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-137556
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 14aafd46ac8547e9800fd0b239a4e2b6
	  System UUID:                6556a3cc-e8a0-4e10-bbe8-bc825fb4ce55
	  Boot ID:                    2882d43f-5a85-456c-aec3-876199af1cc0
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-74ff55c5b-rkctv                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m10s
	  kube-system                 etcd-old-k8s-version-137556                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m17s
	  kube-system                 kindnet-p94zm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m10s
	  kube-system                 kube-apiserver-old-k8s-version-137556             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-old-k8s-version-137556    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-8jz6w                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-old-k8s-version-137556             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 metrics-server-9975d5f86-h5vcp                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-p2r9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-8fm5h               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m37s (x5 over 8m37s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x4 over 8m37s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x4 over 8m37s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m18s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m17s                  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s                  kubelet     Node old-k8s-version-137556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s                  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m8s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m57s                  kubelet     Node old-k8s-version-137556 status is now: NodeReady
	  Normal  Starting                 5m54s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-137556 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m39s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000809] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001129] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=000000003d37d697
	[  +0.001063] FS-Cache: N-key=[8] '02d8c90000000000'
	[  +0.004281] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000bc6509f8
	[  +0.001137] FS-Cache: O-key=[8] '02d8c90000000000'
	[  +0.000764] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000912] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=000000000017120e
	[  +0.001082] FS-Cache: N-key=[8] '02d8c90000000000'
	[  +2.267064] FS-Cache: Duplicate cookie detected
	[  +0.001056] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001170] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000aa2a29a4
	[  +0.001168] FS-Cache: O-key=[8] '01d8c90000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=000000003d37d697
	[  +0.001300] FS-Cache: N-key=[8] '01d8c90000000000'
	[  +0.392949] FS-Cache: Duplicate cookie detected
	[  +0.000791] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000981] FS-Cache: O-cookie d=000000000847e55d{9p.inode} n=00000000dae7ba57
	[  +0.001205] FS-Cache: O-key=[8] '07d8c90000000000'
	[  +0.000689] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000911] FS-Cache: N-cookie d=000000000847e55d{9p.inode} n=0000000086e4d09a
	[  +0.001030] FS-Cache: N-key=[8] '07d8c90000000000'
	[May28 22:16] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c6104b8b4482d109a66493c71b372abfe5cfac7f2c526736c072a385dc7798f8] <==
	2024-05-28 22:25:43.028527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:25:53.028348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:03.028513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:13.028464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:23.028415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:33.028659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:43.028496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:26:53.028506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:03.028592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:13.028411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:23.028531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:33.028462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:43.028436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:27:53.028558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:03.028443 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:13.028570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:23.028453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:33.028428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:43.028456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:28:53.028587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:29:03.028456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:29:13.028473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:29:23.028675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:29:33.029276 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:29:43.028529 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 22:29:45 up  6:12,  0 users,  load average: 0.50, 1.63, 2.27
	Linux old-k8s-version-137556 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [dd01bfbb0358d6cf9694d13afcf99a9691a35a5e806521ed56465425c701b2e5] <==
	I0528 22:27:35.586771       1 main.go:227] handling current node
	I0528 22:27:45.600921       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:27:45.600952       1 main.go:227] handling current node
	I0528 22:27:55.606703       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:27:55.606732       1 main.go:227] handling current node
	I0528 22:28:05.612232       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:05.612261       1 main.go:227] handling current node
	I0528 22:28:15.625686       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:15.625713       1 main.go:227] handling current node
	I0528 22:28:25.640436       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:25.640467       1 main.go:227] handling current node
	I0528 22:28:35.645672       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:35.645705       1 main.go:227] handling current node
	I0528 22:28:45.658407       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:45.658435       1 main.go:227] handling current node
	I0528 22:28:55.672137       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:28:55.672165       1 main.go:227] handling current node
	I0528 22:29:05.764618       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:29:05.766117       1 main.go:227] handling current node
	I0528 22:29:15.848444       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:29:15.848472       1 main.go:227] handling current node
	I0528 22:29:25.950911       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:29:25.950942       1 main.go:227] handling current node
	I0528 22:29:36.024814       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0528 22:29:36.024847       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9ac42df22be82ca66d9f1b421375548c94bd4989289312d40efe43edfffb28c3] <==
	I0528 22:26:18.244564       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:26:18.244572       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:26:59.406999       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:26:59.407044       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:26:59.407052       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0528 22:27:07.579516       1 handler_proxy.go:102] no RequestInfo found in the context
	E0528 22:27:07.579683       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:27:07.579735       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0528 22:27:41.490722       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:27:41.490766       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:27:41.490774       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:28:19.555374       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:28:19.555420       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:28:19.555430       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:28:50.981475       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:28:50.981533       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:28:50.981541       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0528 22:29:04.395639       1 handler_proxy.go:102] no RequestInfo found in the context
	E0528 22:29:04.395713       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:29:04.395723       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0528 22:29:25.536438       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:29:25.536482       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:29:25.536491       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [41bbe9eb76e24222f16e08bedb27572c1a1f23de7c322144f5edad4a3849ca29] <==
	E0528 22:25:24.366218       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:25:27.986224       1 request.go:655] Throttling request took 1.048444155s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0528 22:25:28.837739       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:25:54.868469       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:26:00.488147       1 request.go:655] Throttling request took 1.048446419s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0528 22:26:01.339744       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:26:25.370770       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:26:32.990139       1 request.go:655] Throttling request took 1.048400455s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:26:33.841533       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:26:55.872792       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:27:05.491912       1 request.go:655] Throttling request took 1.048342159s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:27:06.343465       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:27:26.380824       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:27:37.993817       1 request.go:655] Throttling request took 1.048402416s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:27:38.845302       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:27:56.883455       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:28:10.495689       1 request.go:655] Throttling request took 1.048377021s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:28:11.347146       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:28:27.385352       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:28:42.997681       1 request.go:655] Throttling request took 1.048235486s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0528 22:28:43.849096       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:28:57.887153       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:29:15.499666       1 request.go:655] Throttling request took 1.048299206s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:29:16.351149       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:29:28.388829       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [7b6e2d0457829e44adcacbd8351453a6bae452df959fbf6234bf0632a810909b] <==
	I0528 22:21:36.987864       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0528 22:21:36.988177       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0528 22:21:37.011026       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0528 22:21:37.011261       1 server_others.go:185] Using iptables Proxier.
	I0528 22:21:37.011554       1 server.go:650] Version: v1.20.0
	I0528 22:21:37.012359       1 config.go:315] Starting service config controller
	I0528 22:21:37.012430       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0528 22:21:37.012489       1 config.go:224] Starting endpoint slice config controller
	I0528 22:21:37.012532       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0528 22:21:37.112582       1 shared_informer.go:247] Caches are synced for service config 
	I0528 22:21:37.112719       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0528 22:24:06.737367       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0528 22:24:06.737610       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0528 22:24:06.808607       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0528 22:24:06.808821       1 server_others.go:185] Using iptables Proxier.
	I0528 22:24:06.809108       1 server.go:650] Version: v1.20.0
	I0528 22:24:06.824713       1 config.go:315] Starting service config controller
	I0528 22:24:06.824814       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0528 22:24:06.824908       1 config.go:224] Starting endpoint slice config controller
	I0528 22:24:06.833337       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0528 22:24:06.924962       1 shared_informer.go:247] Caches are synced for service config 
	I0528 22:24:06.933548       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [b43129904836783c320b181b2b3d5e9b5752b3373a3d3cdfe053173d601b7105] <==
	E0528 22:21:16.498638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 22:21:16.498729       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 22:21:16.498804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 22:21:16.501910       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:21:16.502178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:21:16.502549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 22:21:16.502643       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 22:21:17.356700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 22:21:17.412974       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 22:21:17.483579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 22:21:17.484784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:21:17.498248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 22:21:17.508923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:21:17.534460       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0528 22:21:19.956101       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0528 22:23:57.458752       1 serving.go:331] Generated self-signed cert in-memory
	W0528 22:24:03.178274       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 22:24:03.178413       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:24:03.178428       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 22:24:03.178435       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 22:24:03.755299       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0528 22:24:03.764400       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 22:24:03.764431       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 22:24:03.764456       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0528 22:24:04.079468       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 28 22:28:12 old-k8s-version-137556 kubelet[738]: I0528 22:28:12.703884     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705029     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:28:12 old-k8s-version-137556 kubelet[738]: E0528 22:28:12.705742     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:28:24 old-k8s-version-137556 kubelet[738]: E0528 22:28:24.704663     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:28:26 old-k8s-version-137556 kubelet[738]: I0528 22:28:26.703823     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:28:26 old-k8s-version-137556 kubelet[738]: E0528 22:28:26.704162     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:28:39 old-k8s-version-137556 kubelet[738]: E0528 22:28:39.704787     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:28:40 old-k8s-version-137556 kubelet[738]: I0528 22:28:40.703884     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:28:40 old-k8s-version-137556 kubelet[738]: E0528 22:28:40.704255     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:28:51 old-k8s-version-137556 kubelet[738]: E0528 22:28:51.616331     738 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592, memory: /docker/34fd7d98b135c1134fea5ddc99ba08b42735c00cc3f7217c47fa37b68a70a592/system.slice/kubelet.service
	May 28 22:28:52 old-k8s-version-137556 kubelet[738]: I0528 22:28:52.703870     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.704205     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:28:52 old-k8s-version-137556 kubelet[738]: E0528 22:28:52.705255     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:29:04 old-k8s-version-137556 kubelet[738]: E0528 22:29:04.704842     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:29:07 old-k8s-version-137556 kubelet[738]: I0528 22:29:07.703951     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:29:07 old-k8s-version-137556 kubelet[738]: E0528 22:29:07.704319     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:29:18 old-k8s-version-137556 kubelet[738]: E0528 22:29:18.704693     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:29:20 old-k8s-version-137556 kubelet[738]: I0528 22:29:20.703841     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:29:20 old-k8s-version-137556 kubelet[738]: E0528 22:29:20.704192     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:29:29 old-k8s-version-137556 kubelet[738]: E0528 22:29:29.704635     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:29:34 old-k8s-version-137556 kubelet[738]: I0528 22:29:34.703808     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:29:34 old-k8s-version-137556 kubelet[738]: E0528 22:29:34.704147     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	May 28 22:29:42 old-k8s-version-137556 kubelet[738]: E0528 22:29:42.704958     738 pod_workers.go:191] Error syncing pod 7bd3caf6-43e5-49dc-a226-fcdaf2249bff ("metrics-server-9975d5f86-h5vcp_kube-system(7bd3caf6-43e5-49dc-a226-fcdaf2249bff)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:29:45 old-k8s-version-137556 kubelet[738]: I0528 22:29:45.704985     738 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9c6d3e2adbe78524504e484ae9691bb7b6d52f34a4ca64dd2e3f201d3598ea79
	May 28 22:29:45 old-k8s-version-137556 kubelet[738]: E0528 22:29:45.705976     738 pod_workers.go:191] Error syncing pod 52ebbcd6-6bc2-4840-81e6-3a68800bf50f ("dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-p2r9s_kubernetes-dashboard(52ebbcd6-6bc2-4840-81e6-3a68800bf50f)"
	
	
	==> kubernetes-dashboard [bd823ba62bc36515af73815b13f0095eafdd2fc56df1f3d22fcb9af68a5fd527] <==
	2024/05/28 22:24:30 Using namespace: kubernetes-dashboard
	2024/05/28 22:24:30 Using in-cluster config to connect to apiserver
	2024/05/28 22:24:30 Using secret token for csrf signing
	2024/05/28 22:24:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/28 22:24:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/28 22:24:30 Successful initial request to the apiserver, version: v1.20.0
	2024/05/28 22:24:30 Generating JWE encryption key
	2024/05/28 22:24:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/28 22:24:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/28 22:24:32 Initializing JWE encryption key from synchronized object
	2024/05/28 22:24:32 Creating in-cluster Sidecar client
	2024/05/28 22:24:32 Serving insecurely on HTTP port: 9090
	2024/05/28 22:24:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:25:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:25:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:26:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:27:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:27:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:28:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:28:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:29:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:29:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:24:30 Starting overwatch
	
	
	==> storage-provisioner [05623e44fa29d497e3027c10fedf174172e73932779e0845b16168bfb8528998] <==
	I0528 22:21:53.158252       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 22:21:53.197182       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 22:21:53.197439       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 22:21:53.242070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 22:21:53.242764       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-137556_868d864f-3eab-4c41-b18d-bb027c72ee63!
	I0528 22:21:53.242614       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb0c04ff-3f43-492c-b399-fb03b1ba22b1", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-137556_868d864f-3eab-4c41-b18d-bb027c72ee63 became leader
	I0528 22:21:53.343442       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-137556_868d864f-3eab-4c41-b18d-bb027c72ee63!
	I0528 22:24:06.585477       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 22:24:36.590194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4c549161f111967cb616fe3ed8c91bce54b8269bcb2565d06539abb210ddfc33] <==
	I0528 22:24:37.259057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 22:24:37.275049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 22:24:37.275109       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 22:24:54.734279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 22:24:54.734675       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-137556_12fd02f4-7b11-45e7-801c-ad3f1194016f!
	I0528 22:24:54.735304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb0c04ff-3f43-492c-b399-fb03b1ba22b1", APIVersion:"v1", ResourceVersion:"819", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-137556_12fd02f4-7b11-45e7-801c-ad3f1194016f became leader
	I0528 22:24:54.835782       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-137556_12fd02f4-7b11-45e7-801c-ad3f1194016f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-137556 -n old-k8s-version-137556
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-137556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-h5vcp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-137556 describe pod metrics-server-9975d5f86-h5vcp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-137556 describe pod metrics-server-9975d5f86-h5vcp: exit status 1 (94.222143ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-h5vcp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-137556 describe pod metrics-server-9975d5f86-h5vcp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.69s)

                                                
                                    

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 230.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 7.88
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.19
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
27 TestAddons/Setup 203.55
29 TestAddons/parallel/Registry 15.98
31 TestAddons/parallel/InspektorGadget 11.77
35 TestAddons/parallel/CSI 47.24
36 TestAddons/parallel/Headlamp 11.97
37 TestAddons/parallel/CloudSpanner 6.54
38 TestAddons/parallel/LocalPath 53.23
39 TestAddons/parallel/NvidiaDevicePlugin 6.49
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.17
45 TestAddons/StoppedEnableDisable 12.16
46 TestCertOptions 42.19
47 TestCertExpiration 236.93
49 TestForceSystemdFlag 36.78
50 TestForceSystemdEnv 46.12
56 TestErrorSpam/setup 30.11
57 TestErrorSpam/start 0.68
58 TestErrorSpam/status 0.95
59 TestErrorSpam/pause 1.88
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 1.42
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.42
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 31.61
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
73 TestFunctional/serial/CacheCmd/cache/add_local 1.15
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 35
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.64
84 TestFunctional/serial/LogsFileCmd 1.69
85 TestFunctional/serial/InvalidService 4.72
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 11.77
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 11.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 27.25
99 TestFunctional/parallel/SSHCmd 0.64
100 TestFunctional/parallel/CpCmd 2.21
102 TestFunctional/parallel/FileSync 0.25
103 TestFunctional/parallel/CertSync 1.57
107 TestFunctional/parallel/NodeLabels 0.15
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
111 TestFunctional/parallel/License 0.33
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.51
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
125 TestFunctional/parallel/ServiceCmd/List 0.58
126 TestFunctional/parallel/ProfileCmd/profile_list 0.46
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
130 TestFunctional/parallel/MountCmd/any-port 6.79
131 TestFunctional/parallel/ServiceCmd/Format 0.47
132 TestFunctional/parallel/ServiceCmd/URL 0.45
133 TestFunctional/parallel/MountCmd/specific-port 2.68
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 1.04
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.7
142 TestFunctional/parallel/ImageCommands/Setup 5.15
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.48
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.99
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.41
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
153 TestFunctional/delete_addon-resizer_images 0.08
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 165.68
160 TestMultiControlPlane/serial/DeployApp 6.29
161 TestMultiControlPlane/serial/PingHostFromPods 1.66
162 TestMultiControlPlane/serial/AddWorkerNode 24.29
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
165 TestMultiControlPlane/serial/CopyFile 18.36
166 TestMultiControlPlane/serial/StopSecondaryNode 12.65
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.81
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.61
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 193.38
171 TestMultiControlPlane/serial/DeleteSecondaryNode 13.27
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
173 TestMultiControlPlane/serial/StopCluster 35.74
174 TestMultiControlPlane/serial/RestartCluster 95.88
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
176 TestMultiControlPlane/serial/AddSecondaryNode 63.15
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
181 TestJSONOutput/start/Command 51.39
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.71
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.64
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.81
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 40.62
207 TestKicCustomNetwork/use_default_bridge_network 33.76
208 TestKicExistingNetwork 34.63
209 TestKicCustomSubnet 33.25
210 TestKicStaticIP 33.13
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 74.54
215 TestMountStart/serial/StartWithMountFirst 9.14
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.25
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.59
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.52
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 67.01
227 TestMultiNode/serial/DeployApp2Nodes 4.61
228 TestMultiNode/serial/PingHostFrom2Pods 0.96
229 TestMultiNode/serial/AddNode 22.43
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.3
232 TestMultiNode/serial/CopyFile 9.83
233 TestMultiNode/serial/StopNode 2.22
234 TestMultiNode/serial/StartAfterStop 9.61
235 TestMultiNode/serial/RestartKeepsNodes 90.07
236 TestMultiNode/serial/DeleteNode 5.12
237 TestMultiNode/serial/StopMultiNode 23.78
238 TestMultiNode/serial/RestartMultiNode 55
239 TestMultiNode/serial/ValidateNameConflict 34.41
244 TestPreload 116.09
246 TestScheduledStopUnix 108.55
249 TestInsufficientStorage 10.58
250 TestRunningBinaryUpgrade 66.14
252 TestKubernetesUpgrade 389.11
253 TestMissingContainerUpgrade 147.42
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 40.49
257 TestNoKubernetes/serial/StartWithStopK8s 7.64
258 TestNoKubernetes/serial/Start 9.48
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
260 TestNoKubernetes/serial/ProfileList 8.34
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 7.01
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
264 TestStoppedBinaryUpgrade/Setup 1.09
265 TestStoppedBinaryUpgrade/Upgrade 70.47
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
275 TestPause/serial/Start 49.14
276 TestPause/serial/SecondStartNoReconfiguration 17.97
277 TestPause/serial/Pause 0.8
278 TestPause/serial/VerifyStatus 0.31
279 TestPause/serial/Unpause 0.64
280 TestPause/serial/PauseAgain 0.86
281 TestPause/serial/DeletePaused 2.64
282 TestPause/serial/VerifyDeletedResources 14.44
290 TestNetworkPlugins/group/false 4.52
295 TestStartStop/group/old-k8s-version/serial/FirstStart 167.9
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.8
298 TestStartStop/group/old-k8s-version/serial/Stop 12.71
300 TestStartStop/group/no-preload/serial/FirstStart 68.77
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
303 TestStartStop/group/no-preload/serial/DeployApp 8.34
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
305 TestStartStop/group/no-preload/serial/Stop 12.04
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 296.46
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
312 TestStartStop/group/old-k8s-version/serial/Pause 3.41
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
315 TestStartStop/group/embed-certs/serial/FirstStart 63.05
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
317 TestStartStop/group/no-preload/serial/Pause 4.07
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.88
320 TestStartStop/group/embed-certs/serial/DeployApp 9.36
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
323 TestStartStop/group/embed-certs/serial/Stop 11.96
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/embed-certs/serial/SecondStart 302.56
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.84
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.46
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.8
339 TestStartStop/group/newest-cni/serial/FirstStart 57.52
340 TestNetworkPlugins/group/auto/Start 61.62
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.73
343 TestStartStop/group/newest-cni/serial/Stop 1.32
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
345 TestStartStop/group/newest-cni/serial/SecondStart 17.75
346 TestNetworkPlugins/group/auto/KubeletFlags 0.28
347 TestNetworkPlugins/group/auto/NetCatPod 11.26
348 TestNetworkPlugins/group/auto/DNS 0.31
349 TestNetworkPlugins/group/auto/Localhost 0.24
350 TestNetworkPlugins/group/auto/HairPin 0.23
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/newest-cni/serial/Pause 2.89
355 TestNetworkPlugins/group/kindnet/Start 55.12
356 TestNetworkPlugins/group/calico/Start 78.4
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
360 TestNetworkPlugins/group/kindnet/DNS 0.22
361 TestNetworkPlugins/group/kindnet/Localhost 0.19
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 71.47
365 TestNetworkPlugins/group/calico/KubeletFlags 0.31
366 TestNetworkPlugins/group/calico/NetCatPod 12.33
367 TestNetworkPlugins/group/calico/DNS 0.32
368 TestNetworkPlugins/group/calico/Localhost 0.23
369 TestNetworkPlugins/group/calico/HairPin 0.19
370 TestNetworkPlugins/group/enable-default-cni/Start 88.16
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
373 TestNetworkPlugins/group/custom-flannel/DNS 0.32
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
376 TestNetworkPlugins/group/flannel/Start 71.62
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.35
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/bridge/Start 91.1
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
385 TestNetworkPlugins/group/flannel/NetCatPod 14.37
386 TestNetworkPlugins/group/flannel/DNS 0.28
387 TestNetworkPlugins/group/flannel/Localhost 0.24
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 9.25
391 TestNetworkPlugins/group/bridge/DNS 0.21
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (230.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-064906 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-064906 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3m50.960267688s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (230.96s)
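
Note: for anyone replaying this step by hand, a minimal sketch of how to confirm that the preload this run downloads actually lands in the local cache (the following preload-exists subtest checks the same thing). The default ~/.minikube location is an assumption; the CI job above uses a per-job MINIKUBE_HOME.

    # Hedged sketch: verify the v1.20.0 cri-o preload tarball is cached locally.
    CACHE="${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball"
    TARBALL="preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"
    ls -lh "$CACHE/$TARBALL"
    # The download URL logged above advertises an md5 checksum; compare if desired.
    md5sum "$CACHE/$TARBALL"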

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-064906
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-064906: exit status 85 (70.482398ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-064906 | jenkins | v1.33.1 | 28 May 24 21:27 UTC |          |
	|         | -p download-only-064906        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:27:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:27:03.595452 1355202 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:27:03.595587 1355202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:27:03.595598 1355202 out.go:304] Setting ErrFile to fd 2...
	I0528 21:27:03.595603 1355202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:27:03.595845 1355202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	W0528 21:27:03.595977 1355202 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18966-1349783/.minikube/config/config.json: open /home/jenkins/minikube-integration/18966-1349783/.minikube/config/config.json: no such file or directory
	I0528 21:27:03.596370 1355202 out.go:298] Setting JSON to true
	I0528 21:27:03.597250 1355202 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18572,"bootTime":1716913052,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:27:03.597324 1355202 start.go:139] virtualization:  
	I0528 21:27:03.600074 1355202 out.go:97] [download-only-064906] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0528 21:27:03.600275 1355202 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball: no such file or directory
	I0528 21:27:03.600392 1355202 notify.go:220] Checking for updates...
	I0528 21:27:03.603217 1355202 out.go:169] MINIKUBE_LOCATION=18966
	I0528 21:27:03.605233 1355202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:27:03.607081 1355202 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:27:03.609293 1355202 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:27:03.611449 1355202 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0528 21:27:03.616442 1355202 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 21:27:03.616699 1355202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:27:03.637563 1355202 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:27:03.637668 1355202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:27:03.705109 1355202 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 21:27:03.695603187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:27:03.705214 1355202 docker.go:295] overlay module found
	I0528 21:27:03.707145 1355202 out.go:97] Using the docker driver based on user configuration
	I0528 21:27:03.707171 1355202 start.go:297] selected driver: docker
	I0528 21:27:03.707178 1355202 start.go:901] validating driver "docker" against <nil>
	I0528 21:27:03.707290 1355202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:27:03.760258 1355202 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 21:27:03.7514313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:27:03.760420 1355202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:27:03.760712 1355202 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0528 21:27:03.760867 1355202 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 21:27:03.763092 1355202 out.go:169] Using Docker driver with root privileges
	I0528 21:27:03.764891 1355202 cni.go:84] Creating CNI manager for ""
	I0528 21:27:03.764912 1355202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:27:03.764922 1355202 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 21:27:03.765009 1355202 start.go:340] cluster config:
	{Name:download-only-064906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-064906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:27:03.767387 1355202 out.go:97] Starting "download-only-064906" primary control-plane node in "download-only-064906" cluster
	I0528 21:27:03.767407 1355202 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 21:27:03.769296 1355202 out.go:97] Pulling base image v0.0.44-1716228441-18934 ...
	I0528 21:27:03.769320 1355202 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:27:03.769418 1355202 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 21:27:03.782955 1355202 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 21:27:03.783601 1355202 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 21:27:03.783716 1355202 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 21:27:03.854086 1355202 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0528 21:27:03.854125 1355202 cache.go:56] Caching tarball of preloaded images
	I0528 21:27:03.854282 1355202 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:27:03.856984 1355202 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0528 21:27:03.857003 1355202 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0528 21:27:04.150082 1355202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0528 21:27:14.483767 1355202 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 21:30:52.512426 1355202 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0528 21:30:52.512559 1355202 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-064906 host does not exist
	  To start a cluster, run: "minikube start -p download-only-064906"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
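
Note on the non-zero exit above: the profile was created with --download-only, so (as the quoted output says) the control-plane host never exists and "minikube logs" has nothing to collect; the subtest records the failure but still passes. A rough way to see the same behaviour outside the harness (the exact exit code 85 is what this run observed, not a documented contract):

    # Run against the download-only profile before any real start:
    out/minikube-linux-arm64 logs -p download-only-064906
    echo "exit status: $?"    # 85 in the run logged above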

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-064906
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (7.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-723265 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-723265 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.878985325s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (7.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-723265
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-723265: exit status 85 (72.084524ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-064906 | jenkins | v1.33.1 | 28 May 24 21:27 UTC |                     |
	|         | -p download-only-064906        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| delete  | -p download-only-064906        | download-only-064906 | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -o=json --download-only        | download-only-723265 | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | -p download-only-723265        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:30:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:30:54.935200 1355375 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:30:54.935339 1355375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:54.935349 1355375 out.go:304] Setting ErrFile to fd 2...
	I0528 21:30:54.935353 1355375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:54.935601 1355375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:30:54.936029 1355375 out.go:298] Setting JSON to true
	I0528 21:30:54.936867 1355375 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18803,"bootTime":1716913052,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:30:54.936944 1355375 start.go:139] virtualization:  
	I0528 21:30:54.939419 1355375 out.go:97] [download-only-723265] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:30:54.941327 1355375 out.go:169] MINIKUBE_LOCATION=18966
	I0528 21:30:54.939638 1355375 notify.go:220] Checking for updates...
	I0528 21:30:54.944069 1355375 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:30:54.945940 1355375 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:30:54.947593 1355375 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:30:54.949280 1355375 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0528 21:30:54.953767 1355375 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 21:30:54.954169 1355375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:30:54.974696 1355375 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:30:54.974813 1355375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:30:55.045005 1355375 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:30:55.030051936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:30:55.045238 1355375 docker.go:295] overlay module found
	I0528 21:30:55.047339 1355375 out.go:97] Using the docker driver based on user configuration
	I0528 21:30:55.047378 1355375 start.go:297] selected driver: docker
	I0528 21:30:55.047386 1355375 start.go:901] validating driver "docker" against <nil>
	I0528 21:30:55.047515 1355375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:30:55.109491 1355375 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2024-05-28 21:30:55.097827498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:30:55.109695 1355375 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:30:55.110063 1355375 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0528 21:30:55.110264 1355375 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 21:30:55.112473 1355375 out.go:169] Using Docker driver with root privileges
	I0528 21:30:55.114279 1355375 cni.go:84] Creating CNI manager for ""
	I0528 21:30:55.114314 1355375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0528 21:30:55.114329 1355375 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 21:30:55.114430 1355375 start.go:340] cluster config:
	{Name:download-only-723265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-723265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:30:55.116601 1355375 out.go:97] Starting "download-only-723265" primary control-plane node in "download-only-723265" cluster
	I0528 21:30:55.116645 1355375 cache.go:121] Beginning downloading kic base image for docker with crio
	I0528 21:30:55.118782 1355375 out.go:97] Pulling base image v0.0.44-1716228441-18934 ...
	I0528 21:30:55.118874 1355375 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:30:55.118952 1355375 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 21:30:55.134711 1355375 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 21:30:55.134855 1355375 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 21:30:55.134877 1355375 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory, skipping pull
	I0528 21:30:55.134881 1355375 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in cache, skipping pull
	I0528 21:30:55.134889 1355375 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 21:30:55.197643 1355375 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	I0528 21:30:55.197677 1355375 cache.go:56] Caching tarball of preloaded images
	I0528 21:30:55.197877 1355375 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:30:55.200242 1355375 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0528 21:30:55.200283 1355375 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4 ...
	I0528 21:30:55.308698 1355375 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:a3311b98134f2386d0a6251840019f9e -> /home/jenkins/minikube-integration/18966-1349783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-723265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-723265"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-723265
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-950707 --alsologtostderr --binary-mirror http://127.0.0.1:38569 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-950707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-950707
--- PASS: TestBinaryMirror (0.55s)
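
Note: TestBinaryMirror points --binary-mirror at a throwaway local HTTP endpoint (port 38569 above). A rough reproduction sketch; the directory layout the mirror must serve is not shown in this log, and the profile name below is made up.

    # Assumption: python3 is available and ./mirror holds whatever binaries the
    # download-only start asks for; the port is arbitrary.
    python3 -m http.server 38569 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --alsologtostderr --binary-mirror http://127.0.0.1:38569 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p binary-mirror-demo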

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-504712
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-504712: exit status 85 (83.387665ms)

                                                
                                                
-- stdout --
	* Profile "addons-504712" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504712"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-504712
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-504712: exit status 85 (99.842601ms)

                                                
                                                
-- stdout --
	* Profile "addons-504712" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-504712"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/Setup (203.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-504712 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-504712 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m23.547132894s)
--- PASS: TestAddons/Setup (203.55s)
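
Note: the addons exercised by the parallel subtests below can also be toggled after this start; a minimal sketch against the profile this run created (dashboard is used here only because the PreSetup subtests above use it; any name from "addons list" works the same way):

    out/minikube-linux-arm64 -p addons-504712 addons list
    out/minikube-linux-arm64 -p addons-504712 addons enable dashboard
    out/minikube-linux-arm64 -p addons-504712 addons disable dashboard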

                                                
                                    
x
+
TestAddons/parallel/Registry (15.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 50.422876ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gjvvs" [769902e5-f85c-4a07-b2c6-d37f1fb19841] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00544956s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8zzlh" [acd09f12-58ca-45ba-a43a-ccae6df2d939] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004241819s
addons_test.go:342: (dbg) Run:  kubectl --context addons-504712 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-504712 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-504712 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.820652921s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.98s)
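
Note: hand-run equivalents of the two registry checks above, assuming the cluster and addons from TestAddons/Setup are still up. The pod name and the /v2/_catalog path are illustrative additions, not something this log shows.

    # In-cluster service check, same image and wget invocation as the test:
    kubectl --context addons-504712 run --rm registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      wget --spider -S http://registry.kube-system.svc.cluster.local
    # Node-side check against port 5000, the endpoint probed elsewhere in this report:
    curl -s "http://$(out/minikube-linux-arm64 -p addons-504712 ip):5000/v2/_catalog"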

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xx598" [155e29a1-8a1a-4316-b22e-9378afa18593] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004041665s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-504712
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-504712: (5.762818817s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.64309ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-504712 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/05/28 21:34:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:576: (dbg) Run:  kubectl --context addons-504712 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [944cde08-f0ef-4a0b-b17b-18a36e7c68e0] Pending
helpers_test.go:344: "task-pv-pod" [944cde08-f0ef-4a0b-b17b-18a36e7c68e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [944cde08-f0ef-4a0b-b17b-18a36e7c68e0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003294196s
addons_test.go:586: (dbg) Run:  kubectl --context addons-504712 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504712 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-504712 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-504712 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-504712 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-504712 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-504712 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9fd3bcec-71cb-4bdc-838a-12e0c49a660d] Pending
helpers_test.go:344: "task-pv-pod-restore" [9fd3bcec-71cb-4bdc-838a-12e0c49a660d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9fd3bcec-71cb-4bdc-838a-12e0c49a660d] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005168934s
addons_test.go:628: (dbg) Run:  kubectl --context addons-504712 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-504712 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-504712 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.83549059s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.24s)
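
Note: the repeated jsonpath polls above are the helper's way of waiting on a PVC phase; a compact hand-run equivalent (the 3-second interval is a guess, and 120 iterations roughly matches the 6m0s budget the test uses):

    for i in $(seq 1 120); do
      phase=$(kubectl --context addons-504712 get pvc hpvc -n default \
        -o jsonpath='{.status.phase}')
      [ "$phase" = "Bound" ] && break
      sleep 3
    done
    echo "hpvc phase: $phase"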

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-504712 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-48ktv" [6b2fa6f8-bb3c-46da-a446-fcbc98ebc245] Pending
helpers_test.go:344: "headlamp-68456f997b-48ktv" [6b2fa6f8-bb3c-46da-a446-fcbc98ebc245] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-48ktv" [6b2fa6f8-bb3c-46da-a446-fcbc98ebc245] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003607529s
--- PASS: TestAddons/parallel/Headlamp (11.97s)
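
Note: the "waiting 8m0s for pods matching ..." step above is a label-selector readiness wait; a one-line kubectl equivalent (kubectl wait is a substitution here, not what the Go helper literally calls):

    kubectl --context addons-504712 -n headlamp wait pod \
      -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m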

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-4ttqc" [d4e68c8f-c48a-45a1-bc99-c73899fd888f] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003400591s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-504712
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-504712 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-504712 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3e30ef41-bf11-4596-be5d-e6e5ac0cc31b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3e30ef41-bf11-4596-be5d-e6e5ac0cc31b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3e30ef41-bf11-4596-be5d-e6e5ac0cc31b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003695177s
addons_test.go:992: (dbg) Run:  kubectl --context addons-504712 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 ssh "cat /opt/local-path-provisioner/pvc-077d552a-8848-4a18-94e4-4aa30dc26f1e_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-504712 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-504712 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-504712 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-504712 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.150476085s)
--- PASS: TestAddons/parallel/LocalPath (53.23s)
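
A condensed sketch of the local-path flow above, with a placeholder PV directory name since the pvc-... path under /opt/local-path-provisioner differs on every run:

    kubectl --context addons-504712 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-504712 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-504712 get pvc test-pvc -o jsonpath='{.status.phase}'
    # after the busybox pod completes, the written file is visible on the node
    minikube -p addons-504712 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"
    kubectl --context addons-504712 delete pod test-local-path
    kubectl --context addons-504712 delete pvc test-pvc
    minikube -p addons-504712 addons disable storage-provisioner-rancher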

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p6z9d" [5a8692ef-b68d-4ec3-a15c-1c8c61eff11e] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003922626s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-504712
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-bjx67" [aac7ddd7-4f39-4e71-b2f6-88906547d194] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004131377s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-504712 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-504712 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-504712
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-504712: (11.895815734s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-504712
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-504712
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-504712
--- PASS: TestAddons/StoppedEnableDisable (12.16s)

                                                
                                    
x
+
TestCertOptions (42.19s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-768265 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-768265 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.330173408s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-768265 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-768265 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-768265 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-768265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-768265
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-768265: (2.138351028s)
--- PASS: TestCertOptions (42.19s)
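
The flags under test map directly onto a manual invocation. A sketch with a placeholder profile name; the grep for the SAN block is an addition, standing in for the programmatic check the test performs:

    minikube start -p cert-options --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=crio
    # the extra IPs/names should appear as Subject Alternative Names in the apiserver cert
    minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # and the kubeconfig should point at the non-default apiserver port 8555
    kubectl --context cert-options config view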

                                                
                                    
x
+
TestCertExpiration (236.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-511332 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-511332 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.215630143s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-511332 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-511332 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.241619015s)
helpers_test.go:175: Cleaning up "cert-expiration-511332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-511332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-511332: (2.471942682s)
--- PASS: TestCertExpiration (236.93s)
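
The total runtime (236.93s against roughly 55s of starts) suggests the test waits out the 3-minute expiry between the two invocations. A sketch of the two starts, with a placeholder profile name:

    minikube start -p cert-expiration --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=crio
    # ...wait for the short-lived certificates to lapse, then restart with a long expiry,
    # which is expected to regenerate them
    minikube start -p cert-expiration --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=crio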

                                                
                                    
x
+
TestForceSystemdFlag (36.78s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-189777 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-189777 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.153528955s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-189777 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-189777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-189777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-189777: (2.33142665s)
--- PASS: TestForceSystemdFlag (36.78s)
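
A sketch of the same check by hand, with a placeholder profile name. The test inspects the CRI-O drop-in, which with --force-systemd is expected to select the systemd cgroup manager:

    minikube start -p force-systemd --memory=2048 --force-systemd \
      --driver=docker --container-runtime=crio
    # look for the systemd cgroup manager setting in the generated drop-in
    minikube -p force-systemd ssh "cat /etc/crio/crio.conf.d/02-crio.conf"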

                                                
                                    
x
+
TestForceSystemdEnv (46.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-959636 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-959636 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.696137623s)
helpers_test.go:175: Cleaning up "force-systemd-env-959636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-959636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-959636: (4.426426565s)
--- PASS: TestForceSystemdEnv (46.12s)

                                                
                                    
x
+
TestErrorSpam/setup (30.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-785546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-785546 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-785546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-785546 --driver=docker  --container-runtime=crio: (30.106770394s)
--- PASS: TestErrorSpam/setup (30.11s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (1.88s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 pause
--- PASS: TestErrorSpam/pause (1.88s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 stop: (1.245138054s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-785546 --log_dir /tmp/nospam-785546 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18966-1349783/.minikube/files/etc/test/nested/copy/1355197/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (49.42s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-486060 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.415980379s)
--- PASS: TestFunctional/serial/StartWithProxy (49.42s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31.61s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-486060 --alsologtostderr -v=8: (31.608138023s)
functional_test.go:659: soft start took 31.611733477s for "functional-486060" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.61s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-486060 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:3.1: (1.211180069s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:3.3: (1.194317003s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 cache add registry.k8s.io/pause:latest: (1.185636812s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-486060 /tmp/TestFunctionalserialCacheCmdcacheadd_local820767227/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache add minikube-local-cache-test:functional-486060
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache delete minikube-local-cache-test:functional-486060
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-486060
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.863714ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 cache reload: (1.052861326s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
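
The sequence above is a useful recipe on its own. A sketch against this run's profile, using minikube rather than the test binary; exit codes in the comments are taken from the output above:

    minikube -p functional-486060 cache add registry.k8s.io/pause:latest
    # remove the image from the node's container runtime and confirm it is gone
    minikube -p functional-486060 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-486060 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: no such image
    # cache reload pushes every cached image back into the runtime
    minikube -p functional-486060 cache reload
    minikube -p functional-486060 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again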

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 kubectl -- --context functional-486060 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-486060 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-486060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.99639596s)
functional_test.go:757: restart took 34.996519115s for "functional-486060" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.00s)
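
For reference, the restart above passes a component flag straight through to the apiserver. A sketch of the equivalent manual command against this run's profile:

    # --extra-config takes <component>.<flag>=<value>; --wait=all verifies all cluster components after start
    minikube start -p functional-486060 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all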

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-486060 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 logs: (1.635191661s)
--- PASS: TestFunctional/serial/LogsCmd (1.64s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 logs --file /tmp/TestFunctionalserialLogsFileCmd137936095/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 logs --file /tmp/TestFunctionalserialLogsFileCmd137936095/001/logs.txt: (1.691816647s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.72s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-486060 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-486060
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-486060: exit status 115 (590.483343ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32319 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-486060 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.72s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 config get cpus: exit status 14 (95.625373ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 config get cpus: exit status 14 (73.421871ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
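
A sketch of the set/get/unset cycle the test drives; as the output above shows, config get on an unset key prints "specified key could not be found in config" and exits 14:

    minikube -p functional-486060 config set cpus 2
    minikube -p functional-486060 config get cpus      # prints 2
    minikube -p functional-486060 config unset cpus
    minikube -p functional-486060 config get cpus      # exits 14: key not found in config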

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-486060 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-486060 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1381329: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.77s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-486060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.714884ms)

                                                
                                                
-- stdout --
	* [functional-486060] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:44:54.223049 1381053 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:44:54.223205 1381053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:44:54.223230 1381053 out.go:304] Setting ErrFile to fd 2...
	I0528 21:44:54.223243 1381053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:44:54.223664 1381053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:44:54.224133 1381053 out.go:298] Setting JSON to false
	I0528 21:44:54.225274 1381053 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19643,"bootTime":1716913052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:44:54.225348 1381053 start.go:139] virtualization:  
	I0528 21:44:54.228462 1381053 out.go:177] * [functional-486060] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:44:54.233085 1381053 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:44:54.235075 1381053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:44:54.233174 1381053 notify.go:220] Checking for updates...
	I0528 21:44:54.237093 1381053 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:44:54.239218 1381053 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:44:54.241212 1381053 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:44:54.243288 1381053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:44:54.246273 1381053 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:44:54.247175 1381053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:44:54.269102 1381053 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:44:54.269231 1381053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:44:54.346213 1381053 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-28 21:44:54.335418808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:44:54.346453 1381053 docker.go:295] overlay module found
	I0528 21:44:54.350361 1381053 out.go:177] * Using the docker driver based on existing profile
	I0528 21:44:54.352330 1381053 start.go:297] selected driver: docker
	I0528 21:44:54.352351 1381053 start.go:901] validating driver "docker" against &{Name:functional-486060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-486060 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:44:54.352461 1381053 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:44:54.354789 1381053 out.go:177] 
	W0528 21:44:54.356758 1381053 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0528 21:44:54.358784 1381053 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
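
A sketch of the failing dry run: 250MB is below the 1800MB minimum reported in the stderr above, so the command validates the flags against the existing profile and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without changing anything:

    minikube start -p functional-486060 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio
    echo $?   # 23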

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-486060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-486060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.887786ms)

                                                
                                                
-- stdout --
	* [functional-486060] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:44:54.030867 1381014 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:44:54.031015 1381014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:44:54.031035 1381014 out.go:304] Setting ErrFile to fd 2...
	I0528 21:44:54.031040 1381014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:44:54.031419 1381014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:44:54.031842 1381014 out.go:298] Setting JSON to false
	I0528 21:44:54.032951 1381014 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19642,"bootTime":1716913052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 21:44:54.033032 1381014 start.go:139] virtualization:  
	I0528 21:44:54.036345 1381014 out.go:177] * [functional-486060] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0528 21:44:54.038431 1381014 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:44:54.040700 1381014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:44:54.038590 1381014 notify.go:220] Checking for updates...
	I0528 21:44:54.043163 1381014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 21:44:54.045324 1381014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 21:44:54.047288 1381014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:44:54.049434 1381014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:44:54.051926 1381014 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:44:54.052536 1381014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:44:54.073587 1381014 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:44:54.073713 1381014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:44:54.155161 1381014 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-28 21:44:54.142486357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:44:54.155274 1381014 docker.go:295] overlay module found
	I0528 21:44:54.158290 1381014 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0528 21:44:54.160692 1381014 start.go:297] selected driver: docker
	I0528 21:44:54.160712 1381014 start.go:901] validating driver "docker" against &{Name:functional-486060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-486060 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:44:54.160835 1381014 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:44:54.164139 1381014 out.go:177] 
	W0528 21:44:54.166672 1381014 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0528 21:44:54.169229 1381014 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-486060 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-486060 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-dnmrk" [3985aca9-770c-4322-ac50-1eebad7c9a47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0528 21:44:33.257937 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-6f49f58cd5-dnmrk" [3985aca9-770c-4322-ac50-1eebad7c9a47] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006182715s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32000
functional_test.go:1671: http://192.168.49.2:32000: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-dnmrk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32000
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
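
The sequence above (deploy, expose as a NodePort, resolve the URL, fetch it) can be reproduced manually; a sketch, where the final curl stands in for the test's own HTTP probe:

	kubectl --context functional-486060 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-486060 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-486060 service hello-node-connect --url)
	curl -s "$URL"    # should return the echoserver request dump shown above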

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [34299724-ab4b-476a-95f9-2fc4a0c89e34] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00658937s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-486060 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-486060 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-486060 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-486060 apply -f testdata/storage-provisioner/pod.yaml
E0528 21:44:29.416998 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f3ef88d6-95f6-42c9-9a8a-c06576f59f2d] Pending
helpers_test.go:344: "sp-pod" [f3ef88d6-95f6-42c9-9a8a-c06576f59f2d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0528 21:44:30.697711 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [f3ef88d6-95f6-42c9-9a8a-c06576f59f2d] Running
E0528 21:44:38.379016 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003253916s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-486060 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-486060 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-486060 delete -f testdata/storage-provisioner/pod.yaml: (1.265803268s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-486060 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2e496014-1287-4a58-975a-a35d6e92467b] Pending
helpers_test.go:344: "sp-pod" [2e496014-1287-4a58-975a-a35d6e92467b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004036015s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-486060 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.25s)
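
The test asserts that data written through the claim survives pod recreation; in outline (manifests as referenced above, paths relative to the minikube integration test directory):

	kubectl --context functional-486060 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-486060 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-486060 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-486060 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-486060 apply -f testdata/storage-provisioner/pod.yaml
	# the file written by the first pod should still be visible from the second
	kubectl --context functional-486060 exec sp-pod -- ls /tmp/mount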

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh -n functional-486060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cp functional-486060:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3932945224/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh -n functional-486060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh -n functional-486060 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.21s)
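
Each helper call above is a cp followed by an ssh cat to verify the copy landed; a condensed sketch (the /tmp destination below is illustrative, not the generated temp dir from the log):

	out/minikube-linux-arm64 -p functional-486060 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-486060 ssh -n functional-486060 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p functional-486060 cp functional-486060:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-arm64 -p functional-486060 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt   # per the passing check above, missing directories are created inside the node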

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1355197/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/test/nested/copy/1355197/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1355197.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/1355197.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1355197.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /usr/share/ca-certificates/1355197.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13551972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/13551972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13551972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /usr/share/ca-certificates/13551972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)
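
Every assertion here is a single ssh'd sudo cat of a synced certificate; for example (file names verbatim from the log; 1355197 appears to be the test process ID used to name the synced files):

	out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/1355197.pem"
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /usr/share/ca-certificates/1355197.pem"
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named symlink checked by the same test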

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-486060 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)
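
The node-label check is a single go-template query over the first node's labels; the equivalent shell invocation, quoted for interactive use:

	kubectl --context functional-486060 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'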

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active docker": exit status 1 (333.506451ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active containerd": exit status 1 (327.84005ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
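
Because this run uses the crio runtime, docker and containerd are expected to be inactive inside the node; the non-zero exits above are simply systemctl's status code for an inactive unit:

	out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active containerd"   # prints "inactive", exits 3
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo systemctl is-active crio"         # assumption: the active runtime should report "active"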

                                                
                                    
x
+
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1379026: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-486060 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [15fc7fb0-637c-4ec3-a50a-1762f59bf49c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [15fc7fb0-637c-4ec3-a50a-1762f59bf49c] Running
E0528 21:44:28.138293 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.144485 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.154784 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.175139 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.215514 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.295894 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.456274 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:44:28.776822 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004457781s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-486060 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.49.130 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
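
Taken together, the tunnel sub-tests amount to: start a tunnel, create a LoadBalancer service, confirm it receives an ingress IP, then stop the tunnel. A sketch (testsvc.yaml and the service name come from the log; probing the IP with curl is an assumption, the test uses its own HTTP client):

	out/minikube-linux-arm64 -p functional-486060 tunnel --alsologtostderr &   # keep running; it routes LoadBalancer IPs
	kubectl --context functional-486060 apply -f testdata/testsvc.yaml
	kubectl --context functional-486060 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.103.49.130/   # IP as reported above; replace with the IP printed by the previous command
	kill $!                         # stop the tunnel (the DeleteTunnel step)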

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-486060 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-486060 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-hwkvd" [7bbebc13-7a61-4b04-a68a-44abaa77a6f9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-hwkvd" [7bbebc13-7a61-4b04-a68a-44abaa77a6f9] Running
E0528 21:44:48.619272 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003931586s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "397.445057ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "58.64726ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service list -o json
functional_test.go:1490: Took "595.668919ms" to run "out/minikube-linux-arm64 -p functional-486060 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "390.00563ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "71.601387ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
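
The profile listing checks above only vary output format; the timings suggest the light variants skip the slower cluster status probing:

	out/minikube-linux-arm64 profile list
	out/minikube-linux-arm64 profile list -l              # shorthand for --light; skips validating cluster status
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light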

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31526
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdany-port1021792219/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716932691705829091" to /tmp/TestFunctionalparallelMountCmdany-port1021792219/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716932691705829091" to /tmp/TestFunctionalparallelMountCmdany-port1021792219/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716932691705829091" to /tmp/TestFunctionalparallelMountCmdany-port1021792219/001/test-1716932691705829091
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 28 21:44 created-by-test
-rw-r--r-- 1 docker docker 24 May 28 21:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 28 21:44 test-1716932691705829091
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh cat /mount-9p/test-1716932691705829091
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-486060 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [329ca01c-c40b-4e76-9a7b-87e53a34e307] Pending
helpers_test.go:344: "busybox-mount" [329ca01c-c40b-4e76-9a7b-87e53a34e307] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [329ca01c-c40b-4e76-9a7b-87e53a34e307] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [329ca01c-c40b-4e76-9a7b-87e53a34e307] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004426676s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-486060 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdany-port1021792219/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.79s)
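
The 9p mount check is: start a mount in the background, confirm it with findmnt, exercise it from inside the node, then unmount. A reduced sketch (the host directory below is a stand-in for the generated temp dir in the log):

	out/minikube-linux-arm64 mount -p functional-486060 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-486060 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo umount -f /mount-9p"
	out/minikube-linux-arm64 mount -p functional-486060 --kill=true   # clean up the background mount process, as in VerifyCleanup below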

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31526
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdspecific-port903864117/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (600.918912ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdspecific-port903864117/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh "sudo umount -f /mount-9p": exit status 1 (369.540562ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-486060 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdspecific-port903864117/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.68s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T" /mount1: exit status 1 (1.063257834s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-486060 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-486060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3339478686/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 version -o=json --components
E0528 21:45:09.099824 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 version -o=json --components: (1.034951101s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-486060 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-486060
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-486060 image ls --format short --alsologtostderr:
I0528 21:45:26.524056 1383648 out.go:291] Setting OutFile to fd 1 ...
I0528 21:45:26.524268 1383648 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.524293 1383648 out.go:304] Setting ErrFile to fd 2...
I0528 21:45:26.524312 1383648 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.524576 1383648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
I0528 21:45:26.526889 1383648 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.527291 1383648 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.529731 1383648 cli_runner.go:164] Run: docker container inspect functional-486060 --format={{.State.Status}}
I0528 21:45:26.547796 1383648 ssh_runner.go:195] Run: systemctl --version
I0528 21:45:26.547850 1383648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-486060
I0528 21:45:26.576313 1383648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/functional-486060/id_rsa Username:docker}
I0528 21:45:26.667354 1383648 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
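
All of the image-list variants wrap the same in-node query (sudo crictl images --output json, per the stderr above) behind different renderers; the remaining formats are exercised in the tests that follow:

	out/minikube-linux-arm64 -p functional-486060 image ls --format short
	out/minikube-linux-arm64 -p functional-486060 image ls --format table
	out/minikube-linux-arm64 -p functional-486060 image ls --format json
	out/minikube-linux-arm64 -p functional-486060 image ls --format yaml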

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-486060 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | 89d73d416b992 | 62MB   |
| gcr.io/google-containers/addon-resizer  | functional-486060  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 234ac56e455be | 108MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 9d6767b714bf1 | 51.5MB |
| docker.io/library/nginx                 | latest             | 8dd77ef2d82ea | 197MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 988b55d423baf | 114MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 05eccb821e159 | 89.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | 163ff818d154d | 61.6MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-486060 image ls --format table --alsologtostderr:
I0528 21:45:27.145041 1383779 out.go:291] Setting OutFile to fd 1 ...
I0528 21:45:27.148365 1383779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:27.148413 1383779 out.go:304] Setting ErrFile to fd 2...
I0528 21:45:27.148436 1383779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:27.148734 1383779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
I0528 21:45:27.149469 1383779 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:27.149641 1383779 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:27.150244 1383779 cli_runner.go:164] Run: docker container inspect functional-486060 --format={{.State.Status}}
I0528 21:45:27.182533 1383779 ssh_runner.go:195] Run: systemctl --version
I0528 21:45:27.182591 1383779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-486060
I0528 21:45:27.209020 1383779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/functional-486060/id_rsa Username:docker}
I0528 21:45:27.298587 1383779 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-486060 image ls --format json --alsologtostderr:
[{"id":"9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb","repoDigests":["docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa","docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00"],"repoTags":["docker.io/library/nginx:alpine"],"size":"51540272"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:7107370c7cd3eba054a9326c2856988e79c9364e0244c530
26dd87111c8e1882"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"108229958"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"62007858"},{"id":"988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db
1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:9015c784f0e3e72028f801f3331bf3149db3c04b9212bc53f08c1e8924597bf7"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"113538528"},{"id":"05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee","repoDigests":["registry.k8s.io/kube-proxy@sha256:40a978ff6e378a33e3508910a74993bf9b442ad0d97c7b939f4324db51602c28","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"89133975"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"]
,"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:fba503a1eff02dfe4d3c91ad7f52cb6d298fe53709046e9025a35ef9af20e236"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"61568326"},{"id":"8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f","repoDigests":["docker.io/library/nginx@sha256:557b2c07439ee9e53cb178e3bdbb87114b31c48a41a17c8750c5908d65adeec6","docker.io/
library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c"],"repoTags":["docker.io/library/nginx:latest"],"size":"197095429"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-486060"],"size":"34114467"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"5200
14"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-486060 image ls --format json --alsologtostderr:
I0528 21:45:26.818814 1383708 out.go:291] Setting OutFile to fd 1 ...
I0528 21:45:26.818953 1383708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.818968 1383708 out.go:304] Setting ErrFile to fd 2...
I0528 21:45:26.818974 1383708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.819249 1383708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
I0528 21:45:26.819981 1383708 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.820103 1383708 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.820622 1383708 cli_runner.go:164] Run: docker container inspect functional-486060 --format={{.State.Status}}
I0528 21:45:26.839131 1383708 ssh_runner.go:195] Run: systemctl --version
I0528 21:45:26.839192 1383708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-486060
I0528 21:45:26.874251 1383708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/functional-486060/id_rsa Username:docker}
I0528 21:45:26.973555 1383708 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-486060 image ls --format yaml --alsologtostderr:
- id: 9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb
repoDigests:
- docker.io/library/nginx@sha256:05325b3a32db871dc396a859d9a9609d75f50d2f7ad12194f9f3a550111bdcaa
- docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00
repoTags:
- docker.io/library/nginx:alpine
size: "51540272"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:7107370c7cd3eba054a9326c2856988e79c9364e0244c53026dd87111c8e1882
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "108229958"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f
repoDigests:
- docker.io/library/nginx@sha256:557b2c07439ee9e53cb178e3bdbb87114b31c48a41a17c8750c5908d65adeec6
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
repoTags:
- docker.io/library/nginx:latest
size: "197095429"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-486060
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "62007858"
- id: 988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:9015c784f0e3e72028f801f3331bf3149db3c04b9212bc53f08c1e8924597bf7
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "113538528"
- id: 05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:40a978ff6e378a33e3508910a74993bf9b442ad0d97c7b939f4324db51602c28
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "89133975"
- id: 163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:fba503a1eff02dfe4d3c91ad7f52cb6d298fe53709046e9025a35ef9af20e236
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "61568326"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-486060 image ls --format yaml --alsologtostderr:
I0528 21:45:26.504807 1383647 out.go:291] Setting OutFile to fd 1 ...
I0528 21:45:26.505006 1383647 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.505031 1383647 out.go:304] Setting ErrFile to fd 2...
I0528 21:45:26.505051 1383647 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:26.505321 1383647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
I0528 21:45:26.506060 1383647 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.506245 1383647 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:26.506900 1383647 cli_runner.go:164] Run: docker container inspect functional-486060 --format={{.State.Status}}
I0528 21:45:26.531149 1383647 ssh_runner.go:195] Run: systemctl --version
I0528 21:45:26.531204 1383647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-486060
I0528 21:45:26.549574 1383647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/functional-486060/id_rsa Username:docker}
I0528 21:45:26.642243 1383647 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
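Note: the listing above is produced by running sudo crictl images --output json on the node (visible in the stderr log) and rendering it in the requested format. A minimal way to pull the same data by hand, assuming the functional-486060 profile is still running:
	# list images known to CRI-O, rendered by minikube
	out/minikube-linux-arm64 -p functional-486060 image ls --format yaml
	# or query crictl directly on the node
	out/minikube-linux-arm64 -p functional-486060 ssh "sudo crictl images --output json"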

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-486060 ssh pgrep buildkitd: exit status 1 (340.962731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image build -t localhost/my-image:functional-486060 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 image build -t localhost/my-image:functional-486060 testdata/build --alsologtostderr: (2.133540056s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-486060 image build -t localhost/my-image:functional-486060 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bb2f2108e1b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-486060
--> bd7cd2f45d0
Successfully tagged localhost/my-image:functional-486060
bd7cd2f45d069762278ffc7e657bf265f2b6595f26394d067355f81eb208e43e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-486060 image build -t localhost/my-image:functional-486060 testdata/build --alsologtostderr:
I0528 21:45:27.158426 1383783 out.go:291] Setting OutFile to fd 1 ...
I0528 21:45:27.159051 1383783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:27.159064 1383783 out.go:304] Setting ErrFile to fd 2...
I0528 21:45:27.159070 1383783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:45:27.159423 1383783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
I0528 21:45:27.160242 1383783 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:27.161094 1383783 config.go:182] Loaded profile config "functional-486060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 21:45:27.162130 1383783 cli_runner.go:164] Run: docker container inspect functional-486060 --format={{.State.Status}}
I0528 21:45:27.202485 1383783 ssh_runner.go:195] Run: systemctl --version
I0528 21:45:27.202561 1383783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-486060
I0528 21:45:27.227759 1383783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34309 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/functional-486060/id_rsa Username:docker}
I0528 21:45:27.326433 1383783 build_images.go:161] Building image from path: /tmp/build.2701588960.tar
I0528 21:45:27.326507 1383783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0528 21:45:27.342475 1383783 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2701588960.tar
I0528 21:45:27.346141 1383783 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2701588960.tar: stat -c "%s %y" /var/lib/minikube/build/build.2701588960.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2701588960.tar': No such file or directory
I0528 21:45:27.346186 1383783 ssh_runner.go:362] scp /tmp/build.2701588960.tar --> /var/lib/minikube/build/build.2701588960.tar (3072 bytes)
I0528 21:45:27.372969 1383783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2701588960
I0528 21:45:27.381917 1383783 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2701588960 -xf /var/lib/minikube/build/build.2701588960.tar
I0528 21:45:27.392106 1383783 crio.go:315] Building image: /var/lib/minikube/build/build.2701588960
I0528 21:45:27.392192 1383783 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-486060 /var/lib/minikube/build/build.2701588960 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0528 21:45:29.147430 1383783 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-486060 /var/lib/minikube/build/build.2701588960 --cgroup-manager=cgroupfs: (1.755208766s)
I0528 21:45:29.147507 1383783 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2701588960
I0528 21:45:29.158730 1383783 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2701588960.tar
I0528 21:45:29.168043 1383783 build_images.go:217] Built localhost/my-image:functional-486060 from /tmp/build.2701588960.tar
I0528 21:45:29.168125 1383783 build_images.go:133] succeeded building to: functional-486060
I0528 21:45:29.168136 1383783 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.70s)
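Note: judging from the STEP 1/3 .. 3/3 lines above, the build context in testdata/build contains a content.txt plus a Dockerfile along these lines (a reconstruction from the logged steps, not the verbatim file from the repo):
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
The build itself runs inside the node via sudo podman build (see the stderr log), driven by:
	out/minikube-linux-arm64 -p functional-486060 image build -t localhost/my-image:functional-486060 testdata/build --alsologtostderr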

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/05/28 21:45:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.119837608s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-486060
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
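Note: all three UpdateContextCmd subtests run the same command; update-context rewrites the kubeconfig entry for the profile when the cluster's IP or port has changed. A minimal sketch of checking the result by hand (assumes kubectl is on PATH):
	out/minikube-linux-arm64 -p functional-486060 update-context --alsologtostderr -v=2
	kubectl config view --minify    # inspect the server: entry that update-context manages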

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr: (4.252892441s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr: (2.762403667s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.562818389s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-486060
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr: (3.606662564s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)
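Note: the three *LoadDaemon subtests above all copy an image from the host's Docker daemon into the cluster's CRI-O storage via image load --daemon. The manual equivalent, using this run's tag names:
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-486060
	out/minikube-linux-arm64 -p functional-486060 image load --daemon gcr.io/google-containers/addon-resizer:functional-486060
	out/minikube-linux-arm64 -p functional-486060 image ls    # the functional-486060 tag should now be listed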

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image save gcr.io/google-containers/addon-resizer:functional-486060 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image rm gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-486060 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.01614172s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-486060
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-486060 image save --daemon gcr.io/google-containers/addon-resizer:functional-486060 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-486060
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)
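Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full round trip: save the image to a tarball, remove it from the cluster, load it back from the tarball, then export it to the host daemon. A sketch of the same sequence (the tarball path is arbitrary):
	out/minikube-linux-arm64 -p functional-486060 image save gcr.io/google-containers/addon-resizer:functional-486060 ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-486060 image rm gcr.io/google-containers/addon-resizer:functional-486060
	out/minikube-linux-arm64 -p functional-486060 image load ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-486060 image save --daemon gcr.io/google-containers/addon-resizer:functional-486060
	docker image inspect gcr.io/google-containers/addon-resizer:functional-486060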

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-486060
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-486060
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-486060
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (165.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-054009 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0528 21:45:50.060061 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:47:11.980841 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-054009 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m44.909206987s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (165.68s)
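Note: the --ha flag brings up the profile with three control-plane nodes (ha-054009, -m02, -m03) behind a shared API endpoint (https://192.168.49.254:8443 in the status logs below). Reproducing this run's invocation (memory and verbosity are this job's settings):
	out/minikube-linux-arm64 start -p ha-054009 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr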

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-054009 -- rollout status deployment/busybox: (3.483573259s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6phfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6ppf6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-xmx9k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6phfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6ppf6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-xmx9k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6phfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6ppf6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-xmx9k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.29s)
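Note: the test deploys a three-replica busybox Deployment and checks in-cluster DNS from every pod. The same check by hand (pod names differ per run):
	out/minikube-linux-arm64 kubectl -p ha-054009 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p ha-054009 -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p ha-054009 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local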

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6phfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6phfl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6ppf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-6ppf6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-xmx9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-054009 -- exec busybox-fc5497c4f-xmx9k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-054009 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-054009 -v=7 --alsologtostderr: (23.341511191s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.29s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-054009 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp testdata/cp-test.txt ha-054009:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2766292395/001/cp-test_ha-054009.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009:/home/docker/cp-test.txt ha-054009-m02:/home/docker/cp-test_ha-054009_ha-054009-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test_ha-054009_ha-054009-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009:/home/docker/cp-test.txt ha-054009-m03:/home/docker/cp-test_ha-054009_ha-054009-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test_ha-054009_ha-054009-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009:/home/docker/cp-test.txt ha-054009-m04:/home/docker/cp-test_ha-054009_ha-054009-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test_ha-054009_ha-054009-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp testdata/cp-test.txt ha-054009-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2766292395/001/cp-test_ha-054009-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m02:/home/docker/cp-test.txt ha-054009:/home/docker/cp-test_ha-054009-m02_ha-054009.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test_ha-054009-m02_ha-054009.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m02:/home/docker/cp-test.txt ha-054009-m03:/home/docker/cp-test_ha-054009-m02_ha-054009-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test_ha-054009-m02_ha-054009-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m02:/home/docker/cp-test.txt ha-054009-m04:/home/docker/cp-test_ha-054009-m02_ha-054009-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test_ha-054009-m02_ha-054009-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp testdata/cp-test.txt ha-054009-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2766292395/001/cp-test_ha-054009-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m03:/home/docker/cp-test.txt ha-054009:/home/docker/cp-test_ha-054009-m03_ha-054009.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test_ha-054009-m03_ha-054009.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m03:/home/docker/cp-test.txt ha-054009-m02:/home/docker/cp-test_ha-054009-m03_ha-054009-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test_ha-054009-m03_ha-054009-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m03:/home/docker/cp-test.txt ha-054009-m04:/home/docker/cp-test_ha-054009-m03_ha-054009-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test_ha-054009-m03_ha-054009-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp testdata/cp-test.txt ha-054009-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2766292395/001/cp-test_ha-054009-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m04:/home/docker/cp-test.txt ha-054009:/home/docker/cp-test_ha-054009-m04_ha-054009.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009 "sudo cat /home/docker/cp-test_ha-054009-m04_ha-054009.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m04:/home/docker/cp-test.txt ha-054009-m02:/home/docker/cp-test_ha-054009-m04_ha-054009-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test_ha-054009-m04_ha-054009-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m04:/home/docker/cp-test.txt ha-054009-m03:/home/docker/cp-test_ha-054009-m04_ha-054009-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m03 "sudo cat /home/docker/cp-test_ha-054009-m04_ha-054009-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.36s)
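Note: minikube cp addresses files as <node>:<path>, where the bare profile name is the primary control plane, and each copy is verified with ssh -n <node>. One leg of the matrix above, by hand:
	out/minikube-linux-arm64 -p ha-054009 cp testdata/cp-test.txt ha-054009-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-054009 ssh -n ha-054009-m02 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p ha-054009 cp ha-054009-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-054009-m02.txt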

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-054009 node stop m02 -v=7 --alsologtostderr: (11.951876918s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr: exit status 7 (698.970671ms)

                                                
                                                
-- stdout --
	ha-054009
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-054009-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054009-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-054009-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:49:21.379956 1398685 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:49:21.380213 1398685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:49:21.380242 1398685 out.go:304] Setting ErrFile to fd 2...
	I0528 21:49:21.380262 1398685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:49:21.380520 1398685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:49:21.380739 1398685 out.go:298] Setting JSON to false
	I0528 21:49:21.380789 1398685 mustload.go:65] Loading cluster: ha-054009
	I0528 21:49:21.380865 1398685 notify.go:220] Checking for updates...
	I0528 21:49:21.381807 1398685 config.go:182] Loaded profile config "ha-054009": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:49:21.381850 1398685 status.go:255] checking status of ha-054009 ...
	I0528 21:49:21.382470 1398685 cli_runner.go:164] Run: docker container inspect ha-054009 --format={{.State.Status}}
	I0528 21:49:21.402893 1398685 status.go:330] ha-054009 host status = "Running" (err=<nil>)
	I0528 21:49:21.402921 1398685 host.go:66] Checking if "ha-054009" exists ...
	I0528 21:49:21.403810 1398685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-054009
	I0528 21:49:21.420981 1398685 host.go:66] Checking if "ha-054009" exists ...
	I0528 21:49:21.421432 1398685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:49:21.421484 1398685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-054009
	I0528 21:49:21.443368 1398685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34314 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/ha-054009/id_rsa Username:docker}
	I0528 21:49:21.539663 1398685 ssh_runner.go:195] Run: systemctl --version
	I0528 21:49:21.544224 1398685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:49:21.556661 1398685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:49:21.615715 1398685 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-05-28 21:49:21.606165387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:49:21.616314 1398685 kubeconfig.go:125] found "ha-054009" server: "https://192.168.49.254:8443"
	I0528 21:49:21.616354 1398685 api_server.go:166] Checking apiserver status ...
	I0528 21:49:21.616400 1398685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:21.627040 1398685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	I0528 21:49:21.636070 1398685 api_server.go:182] apiserver freezer: "12:freezer:/docker/0249df60a4b1a63092de38d37f4ef234df7b904a1d191c3bba033baa14d49bf2/crio/crio-1e93a7b657fc95cdfd0c6ac38e85e16fe8cc2d536a09aec6c4a16db624aeb31e"
	I0528 21:49:21.636149 1398685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0249df60a4b1a63092de38d37f4ef234df7b904a1d191c3bba033baa14d49bf2/crio/crio-1e93a7b657fc95cdfd0c6ac38e85e16fe8cc2d536a09aec6c4a16db624aeb31e/freezer.state
	I0528 21:49:21.645240 1398685 api_server.go:204] freezer state: "THAWED"
	I0528 21:49:21.645266 1398685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0528 21:49:21.653967 1398685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0528 21:49:21.653996 1398685 status.go:422] ha-054009 apiserver status = Running (err=<nil>)
	I0528 21:49:21.654008 1398685 status.go:257] ha-054009 status: &{Name:ha-054009 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:49:21.654135 1398685 status.go:255] checking status of ha-054009-m02 ...
	I0528 21:49:21.654451 1398685 cli_runner.go:164] Run: docker container inspect ha-054009-m02 --format={{.State.Status}}
	I0528 21:49:21.672308 1398685 status.go:330] ha-054009-m02 host status = "Stopped" (err=<nil>)
	I0528 21:49:21.672338 1398685 status.go:343] host is not running, skipping remaining checks
	I0528 21:49:21.672346 1398685 status.go:257] ha-054009-m02 status: &{Name:ha-054009-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:49:21.672369 1398685 status.go:255] checking status of ha-054009-m03 ...
	I0528 21:49:21.672799 1398685 cli_runner.go:164] Run: docker container inspect ha-054009-m03 --format={{.State.Status}}
	I0528 21:49:21.690802 1398685 status.go:330] ha-054009-m03 host status = "Running" (err=<nil>)
	I0528 21:49:21.690829 1398685 host.go:66] Checking if "ha-054009-m03" exists ...
	I0528 21:49:21.691145 1398685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-054009-m03
	I0528 21:49:21.707356 1398685 host.go:66] Checking if "ha-054009-m03" exists ...
	I0528 21:49:21.707669 1398685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:49:21.707712 1398685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-054009-m03
	I0528 21:49:21.724144 1398685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/ha-054009-m03/id_rsa Username:docker}
	I0528 21:49:21.817461 1398685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:49:21.829955 1398685 kubeconfig.go:125] found "ha-054009" server: "https://192.168.49.254:8443"
	I0528 21:49:21.829985 1398685 api_server.go:166] Checking apiserver status ...
	I0528 21:49:21.830067 1398685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:21.841490 1398685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1314/cgroup
	I0528 21:49:21.851597 1398685 api_server.go:182] apiserver freezer: "12:freezer:/docker/2d2ff91b7da65a625daf338fd172d737cdaaec855fb1a64c61bf0ed269da2b66/crio/crio-54533a6d5b5ff9178d4878da05efd718378d17e605ebcea0778db21e03164d98"
	I0528 21:49:21.851716 1398685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2d2ff91b7da65a625daf338fd172d737cdaaec855fb1a64c61bf0ed269da2b66/crio/crio-54533a6d5b5ff9178d4878da05efd718378d17e605ebcea0778db21e03164d98/freezer.state
	I0528 21:49:21.861019 1398685 api_server.go:204] freezer state: "THAWED"
	I0528 21:49:21.861053 1398685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0528 21:49:21.869123 1398685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0528 21:49:21.869150 1398685 status.go:422] ha-054009-m03 apiserver status = Running (err=<nil>)
	I0528 21:49:21.869160 1398685 status.go:257] ha-054009-m03 status: &{Name:ha-054009-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:49:21.869176 1398685 status.go:255] checking status of ha-054009-m04 ...
	I0528 21:49:21.869483 1398685 cli_runner.go:164] Run: docker container inspect ha-054009-m04 --format={{.State.Status}}
	I0528 21:49:21.885718 1398685 status.go:330] ha-054009-m04 host status = "Running" (err=<nil>)
	I0528 21:49:21.885746 1398685 host.go:66] Checking if "ha-054009-m04" exists ...
	I0528 21:49:21.886112 1398685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-054009-m04
	I0528 21:49:21.902772 1398685 host.go:66] Checking if "ha-054009-m04" exists ...
	I0528 21:49:21.903067 1398685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:49:21.903120 1398685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-054009-m04
	I0528 21:49:21.924021 1398685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34329 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/ha-054009-m04/id_rsa Username:docker}
	I0528 21:49:22.011491 1398685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:49:22.023348 1398685 status.go:257] ha-054009-m04 status: &{Name:ha-054009-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.65s)
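Note: with one control-plane node stopped, status still prints every node but exits with code 7, which is why the run above records a non-zero exit rather than a failure. By hand:
	out/minikube-linux-arm64 -p ha-054009 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr    # exit status 7 while m02 is down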

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0528 21:49:22.540686 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:22.546694 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 node start m02 -v=7 --alsologtostderr
E0528 21:49:22.557596 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:22.577803 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:22.618727 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:22.700843 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:22.860981 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:23.181555 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:23.822438 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:25.102653 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:27.662831 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:28.137808 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:49:32.783046 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:49:43.023257 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-054009 node start m02 -v=7 --alsologtostderr: (21.236612635s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr: (1.43250162s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.81s)
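Note: restarting the stopped control-plane node and confirming it rejoins the cluster (kubectl uses the profile's kubeconfig context):
	out/minikube-linux-arm64 -p ha-054009 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
	kubectl get nodes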

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.614587897s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-054009 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-054009 -v=7 --alsologtostderr
E0528 21:49:55.821119 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:50:03.503914 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-054009 -v=7 --alsologtostderr: (36.893753215s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-054009 --wait=true -v=7 --alsologtostderr
E0528 21:50:44.464653 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:52:06.384961 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-054009 --wait=true -v=7 --alsologtostderr: (2m36.326204715s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-054009
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.38s)
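Note: the test stops and restarts the whole profile and checks that the node list, including the worker added earlier, survives the restart. By hand:
	out/minikube-linux-arm64 node list -p ha-054009
	out/minikube-linux-arm64 stop -p ha-054009 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-054009 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-054009    # should match the pre-stop list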

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-054009 node delete m03 -v=7 --alsologtostderr: (12.302578091s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.27s)
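Note: the Ready-condition check invoked above can be reproduced by hand with plain kubectl; the --context flag and the ha-054009 profile name are assumptions for running it outside the test harness, which relies on the current kubeconfig context:
	kubectl --context ha-054009 get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
It prints the status of each node's Ready condition, one line per node.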

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-054009 stop -v=7 --alsologtostderr: (35.632762159s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr: exit status 7 (103.135158ms)

                                                
                                                
-- stdout --
	ha-054009
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054009-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054009-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:53:52.795148 1412752 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:52.795358 1412752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:52.795386 1412752 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:52.795407 1412752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:52.795661 1412752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 21:53:52.795884 1412752 out.go:298] Setting JSON to false
	I0528 21:53:52.795936 1412752 mustload.go:65] Loading cluster: ha-054009
	I0528 21:53:52.796038 1412752 notify.go:220] Checking for updates...
	I0528 21:53:52.796409 1412752 config.go:182] Loaded profile config "ha-054009": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:52.796444 1412752 status.go:255] checking status of ha-054009 ...
	I0528 21:53:52.797264 1412752 cli_runner.go:164] Run: docker container inspect ha-054009 --format={{.State.Status}}
	I0528 21:53:52.815190 1412752 status.go:330] ha-054009 host status = "Stopped" (err=<nil>)
	I0528 21:53:52.815214 1412752 status.go:343] host is not running, skipping remaining checks
	I0528 21:53:52.815222 1412752 status.go:257] ha-054009 status: &{Name:ha-054009 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:53:52.815254 1412752 status.go:255] checking status of ha-054009-m02 ...
	I0528 21:53:52.815573 1412752 cli_runner.go:164] Run: docker container inspect ha-054009-m02 --format={{.State.Status}}
	I0528 21:53:52.832804 1412752 status.go:330] ha-054009-m02 host status = "Stopped" (err=<nil>)
	I0528 21:53:52.832830 1412752 status.go:343] host is not running, skipping remaining checks
	I0528 21:53:52.832837 1412752 status.go:257] ha-054009-m02 status: &{Name:ha-054009-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:53:52.832860 1412752 status.go:255] checking status of ha-054009-m04 ...
	I0528 21:53:52.833163 1412752 cli_runner.go:164] Run: docker container inspect ha-054009-m04 --format={{.State.Status}}
	I0528 21:53:52.851584 1412752 status.go:330] ha-054009-m04 host status = "Stopped" (err=<nil>)
	I0528 21:53:52.851606 1412752 status.go:343] host is not running, skipping remaining checks
	I0528 21:53:52.851614 1412752 status.go:257] ha-054009-m04 status: &{Name:ha-054009-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (95.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-054009 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0528 21:54:22.540416 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:54:28.137971 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 21:54:50.225113 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-054009 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.960613165s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (63.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-054009 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-054009 --control-plane -v=7 --alsologtostderr: (1m2.213110801s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-054009 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
x
+
TestJSONOutput/start/Command (51.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-717551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-717551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.389547773s)
--- PASS: TestJSONOutput/start/Command (51.39s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-717551 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-717551 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-717551 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-717551 --output=json --user=testUser: (5.805257931s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-765077 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-765077 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.901855ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"275729b5-65d4-49e4-a488-1e8a99556c2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-765077] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33d7ff27-e1a0-4336-aeb5-6b5b4a090163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"eae06eb0-9e21-486d-9095-851e845b2c8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07b427b9-15d0-4428-a9bd-08a8f32ed659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig"}}
	{"specversion":"1.0","id":"3f986880-24bc-4c7b-b6b9-bec596aac311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube"}}
	{"specversion":"1.0","id":"b70b84d9-be58-4ece-863d-1607f394c44e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a1660b1b-3f49-49ef-bbe3-2ad4489336af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"61c8dff1-1990-4f87-9467-f5ab8db3f96a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-765077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-765077
--- PASS: TestErrorJSONOutput (0.22s)
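Note: each stdout line above is a standalone JSON event, so errors can be filtered out of the stream; a minimal sketch, assuming jq is available and reusing the (since-deleted) profile name from this run:
	out/minikube-linux-arm64 start -p json-output-error-765077 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
With the unsupported driver this should print the DRV_UNSUPPORTED_OS message shown above; minikube itself still exits 56 (use set -o pipefail to surface that through the pipe).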

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.62s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-728301 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-728301 --network=: (38.537018412s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-728301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-728301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-728301: (2.061460661s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.62s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (33.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-454413 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-454413 --network=bridge: (31.863362557s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-454413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-454413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-454413: (1.880340506s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.76s)

                                                
                                    
x
+
TestKicExistingNetwork (34.63s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-789239 --network=existing-network
E0528 21:59:22.540409 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 21:59:28.138168 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-789239 --network=existing-network: (32.510065218s)
helpers_test.go:175: Cleaning up "existing-network-789239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-789239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-789239: (1.986640409s)
--- PASS: TestKicExistingNetwork (34.63s)

                                                
                                    
x
+
TestKicCustomSubnet (33.25s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-967331 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-967331 --subnet=192.168.60.0/24: (31.066194708s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-967331 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-967331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-967331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-967331: (2.160532973s)
--- PASS: TestKicCustomSubnet (33.25s)
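Note: the subnet assertion above can be repeated while the profile still exists; this is the same docker command the test runs, and it should print the 192.168.60.0/24 value passed to --subnet:
	docker network inspect custom-subnet-967331 --format "{{(index .IPAM.Config 0).Subnet}}"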

                                                
                                    
x
+
TestKicStaticIP (33.13s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-754222 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-754222 --static-ip=192.168.200.200: (30.819583621s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-754222 ip
helpers_test.go:175: Cleaning up "static-ip-754222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-754222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-754222: (2.132971179s)
--- PASS: TestKicStaticIP (33.13s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (74.54s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-250366 --driver=docker  --container-runtime=crio
E0528 22:00:51.182173 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-250366 --driver=docker  --container-runtime=crio: (32.296657638s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-253648 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-253648 --driver=docker  --container-runtime=crio: (36.821067126s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-250366
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-253648
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-253648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-253648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-253648: (1.943060779s)
helpers_test.go:175: Cleaning up "first-250366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-250366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-250366: (2.276643757s)
--- PASS: TestMinikubeProfile (74.54s)
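Note: a hedged way to pull just the profile names out of the -ojson output used above, assuming jq is installed and assuming the valid/invalid key layout recent minikube releases emit (the exact schema is not shown in this log):
	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'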

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-590906 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-590906 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.142683354s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-590906 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.25s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-604551 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-604551 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.254372565s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-604551 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-590906 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-590906 --alsologtostderr -v=5: (1.583932498s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-604551 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-604551
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-604551: (1.200020826s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-604551
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-604551: (6.519261788s)
--- PASS: TestMountStart/serial/RestartStopped (7.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-604551 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (67.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717064 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717064 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.463728061s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.01s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-717064 -- rollout status deployment/busybox: (2.855171798s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-bppbj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-v79jp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-bppbj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-v79jp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-bppbj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-v79jp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.61s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-bppbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-bppbj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-v79jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717064 -- exec busybox-fc5497c4f-v79jp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
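Note: the pipeline above is how the test recovers the host IP from inside a pod; awk 'NR==5' picks the line on which this busybox image's nslookup prints the resolved address, and cut strips it down to the bare IP. A standalone sketch, assuming the kubeconfig context and pod name from this run (both will differ on a rerun):
	kubectl --context multinode-717064 exec busybox-fc5497c4f-bppbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"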

                                                
                                    
x
+
TestMultiNode/serial/AddNode (22.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-717064 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-717064 -v 3 --alsologtostderr: (21.811262146s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.43s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-717064 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp testdata/cp-test.txt multinode-717064:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3587868784/001/cp-test_multinode-717064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064:/home/docker/cp-test.txt multinode-717064-m02:/home/docker/cp-test_multinode-717064_multinode-717064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test_multinode-717064_multinode-717064-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064:/home/docker/cp-test.txt multinode-717064-m03:/home/docker/cp-test_multinode-717064_multinode-717064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test_multinode-717064_multinode-717064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp testdata/cp-test.txt multinode-717064-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3587868784/001/cp-test_multinode-717064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m02:/home/docker/cp-test.txt multinode-717064:/home/docker/cp-test_multinode-717064-m02_multinode-717064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test_multinode-717064-m02_multinode-717064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m02:/home/docker/cp-test.txt multinode-717064-m03:/home/docker/cp-test_multinode-717064-m02_multinode-717064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test_multinode-717064-m02_multinode-717064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp testdata/cp-test.txt multinode-717064-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3587868784/001/cp-test_multinode-717064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m03:/home/docker/cp-test.txt multinode-717064:/home/docker/cp-test_multinode-717064-m03_multinode-717064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064 "sudo cat /home/docker/cp-test_multinode-717064-m03_multinode-717064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 cp multinode-717064-m03:/home/docker/cp-test.txt multinode-717064-m02:/home/docker/cp-test_multinode-717064-m03_multinode-717064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test_multinode-717064-m03_multinode-717064-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.83s)
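Note: each step above follows the same copy-then-verify pattern; taking the m02 case verbatim from this run:
	out/minikube-linux-arm64 -p multinode-717064 cp testdata/cp-test.txt multinode-717064-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-717064 ssh -n multinode-717064-m02 "sudo cat /home/docker/cp-test.txt"
The cat output is then compared against the local testdata file to confirm the copy landed on the target node.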

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-717064 node stop m03: (1.214724098s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717064 status: exit status 7 (497.299282ms)

                                                
                                                
-- stdout --
	multinode-717064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr: exit status 7 (507.617275ms)

                                                
                                                
-- stdout --
	multinode-717064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 22:04:10.222585 1462898 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:04:10.222788 1462898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:04:10.222800 1462898 out.go:304] Setting ErrFile to fd 2...
	I0528 22:04:10.222805 1462898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:04:10.223092 1462898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 22:04:10.223306 1462898 out.go:298] Setting JSON to false
	I0528 22:04:10.223367 1462898 mustload.go:65] Loading cluster: multinode-717064
	I0528 22:04:10.223450 1462898 notify.go:220] Checking for updates...
	I0528 22:04:10.224376 1462898 config.go:182] Loaded profile config "multinode-717064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:04:10.224395 1462898 status.go:255] checking status of multinode-717064 ...
	I0528 22:04:10.224846 1462898 cli_runner.go:164] Run: docker container inspect multinode-717064 --format={{.State.Status}}
	I0528 22:04:10.248041 1462898 status.go:330] multinode-717064 host status = "Running" (err=<nil>)
	I0528 22:04:10.248072 1462898 host.go:66] Checking if "multinode-717064" exists ...
	I0528 22:04:10.248427 1462898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717064
	I0528 22:04:10.267436 1462898 host.go:66] Checking if "multinode-717064" exists ...
	I0528 22:04:10.267751 1462898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:04:10.267832 1462898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717064
	I0528 22:04:10.287619 1462898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/multinode-717064/id_rsa Username:docker}
	I0528 22:04:10.375370 1462898 ssh_runner.go:195] Run: systemctl --version
	I0528 22:04:10.379570 1462898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:04:10.391420 1462898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:04:10.455938 1462898 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-05-28 22:04:10.444091376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:04:10.456529 1462898 kubeconfig.go:125] found "multinode-717064" server: "https://192.168.67.2:8443"
	I0528 22:04:10.456573 1462898 api_server.go:166] Checking apiserver status ...
	I0528 22:04:10.456618 1462898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:04:10.467812 1462898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0528 22:04:10.476893 1462898 api_server.go:182] apiserver freezer: "12:freezer:/docker/d897d958aded9eba9be51e0c9ed40c2b1813d13e9da85ef95491d0cbe0bcd412/crio/crio-6859a9cf9498bbc926d53b2a1c6b50b04e36bc655da3dde5d65256737c2f8b6f"
	I0528 22:04:10.476958 1462898 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d897d958aded9eba9be51e0c9ed40c2b1813d13e9da85ef95491d0cbe0bcd412/crio/crio-6859a9cf9498bbc926d53b2a1c6b50b04e36bc655da3dde5d65256737c2f8b6f/freezer.state
	I0528 22:04:10.485630 1462898 api_server.go:204] freezer state: "THAWED"
	I0528 22:04:10.485658 1462898 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0528 22:04:10.493664 1462898 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0528 22:04:10.493703 1462898 status.go:422] multinode-717064 apiserver status = Running (err=<nil>)
	I0528 22:04:10.493717 1462898 status.go:257] multinode-717064 status: &{Name:multinode-717064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 22:04:10.493733 1462898 status.go:255] checking status of multinode-717064-m02 ...
	I0528 22:04:10.494068 1462898 cli_runner.go:164] Run: docker container inspect multinode-717064-m02 --format={{.State.Status}}
	I0528 22:04:10.510728 1462898 status.go:330] multinode-717064-m02 host status = "Running" (err=<nil>)
	I0528 22:04:10.510756 1462898 host.go:66] Checking if "multinode-717064-m02" exists ...
	I0528 22:04:10.511062 1462898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717064-m02
	I0528 22:04:10.527822 1462898 host.go:66] Checking if "multinode-717064-m02" exists ...
	I0528 22:04:10.528130 1462898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:04:10.528184 1462898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717064-m02
	I0528 22:04:10.546713 1462898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/18966-1349783/.minikube/machines/multinode-717064-m02/id_rsa Username:docker}
	I0528 22:04:10.639339 1462898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:04:10.651205 1462898 status.go:257] multinode-717064-m02 status: &{Name:multinode-717064-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0528 22:04:10.651239 1462898 status.go:255] checking status of multinode-717064-m03 ...
	I0528 22:04:10.651621 1462898 cli_runner.go:164] Run: docker container inspect multinode-717064-m03 --format={{.State.Status}}
	I0528 22:04:10.673891 1462898 status.go:330] multinode-717064-m03 host status = "Stopped" (err=<nil>)
	I0528 22:04:10.673915 1462898 status.go:343] host is not running, skipping remaining checks
	I0528 22:04:10.673923 1462898 status.go:257] multinode-717064-m03 status: &{Name:multinode-717064-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
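Note: the stderr trace above shows how status verifies the apiserver on a running node: it finds the kube-apiserver process, checks its cgroup freezer state, and then probes /healthz at https://192.168.67.2:8443. A rough equivalent of that last probe from the host, using kubectl's authenticated transport and assuming the multinode-717064 context from this run:
	kubectl --context multinode-717064 get --raw /healthz
A plain "ok" response corresponds to the 200 logged above.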

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-717064 node start m03 -v=7 --alsologtostderr: (8.877125574s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.61s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (90.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717064
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-717064
E0528 22:04:22.540394 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:04:28.139071 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-717064: (24.76904232s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717064 --wait=true -v=8 --alsologtostderr
E0528 22:05:45.585855 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717064 --wait=true -v=8 --alsologtostderr: (1m5.166847041s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717064
--- PASS: TestMultiNode/serial/RestartKeepsNodes (90.07s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-717064 node delete m03: (4.485583362s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.12s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-717064 stop: (23.596573489s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717064 status: exit status 7 (104.937583ms)

                                                
                                                
-- stdout --
	multinode-717064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr: exit status 7 (79.78753ms)

                                                
                                                
-- stdout --
	multinode-717064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 22:06:19.223339 1469962 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:06:19.223474 1469962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:06:19.223485 1469962 out.go:304] Setting ErrFile to fd 2...
	I0528 22:06:19.223490 1469962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:06:19.223742 1469962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 22:06:19.223928 1469962 out.go:298] Setting JSON to false
	I0528 22:06:19.223959 1469962 mustload.go:65] Loading cluster: multinode-717064
	I0528 22:06:19.224071 1469962 notify.go:220] Checking for updates...
	I0528 22:06:19.224360 1469962 config.go:182] Loaded profile config "multinode-717064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:06:19.224372 1469962 status.go:255] checking status of multinode-717064 ...
	I0528 22:06:19.224860 1469962 cli_runner.go:164] Run: docker container inspect multinode-717064 --format={{.State.Status}}
	I0528 22:06:19.241856 1469962 status.go:330] multinode-717064 host status = "Stopped" (err=<nil>)
	I0528 22:06:19.241882 1469962 status.go:343] host is not running, skipping remaining checks
	I0528 22:06:19.241889 1469962 status.go:257] multinode-717064 status: &{Name:multinode-717064 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 22:06:19.241921 1469962 status.go:255] checking status of multinode-717064-m02 ...
	I0528 22:06:19.242268 1469962 cli_runner.go:164] Run: docker container inspect multinode-717064-m02 --format={{.State.Status}}
	I0528 22:06:19.260378 1469962 status.go:330] multinode-717064-m02 host status = "Stopped" (err=<nil>)
	I0528 22:06:19.260398 1469962 status.go:343] host is not running, skipping remaining checks
	I0528 22:06:19.260406 1469962 status.go:257] multinode-717064-m02 status: &{Name:multinode-717064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.78s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717064 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717064 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.351885224s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717064 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.00s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717064
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717064-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-717064-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.604095ms)

                                                
                                                
-- stdout --
	* [multinode-717064-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-717064-m02' is duplicated with machine name 'multinode-717064-m02' in profile 'multinode-717064'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717064-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717064-m03 --driver=docker  --container-runtime=crio: (32.042050223s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-717064
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-717064: exit status 80 (294.267773ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-717064 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-717064-m03 already exists in multinode-717064-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-717064-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-717064-m03: (1.927199304s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.41s)

                                                
                                    
TestPreload (116.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-098943 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-098943 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.147174913s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-098943 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-098943 image pull gcr.io/k8s-minikube/busybox: (1.814687025s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-098943
E0528 22:09:22.540460 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-098943: (5.755028235s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-098943 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0528 22:09:28.138462 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-098943 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.733556275s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-098943 image list
helpers_test.go:175: Cleaning up "test-preload-098943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-098943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-098943: (2.353734093s)
--- PASS: TestPreload (116.09s)

                                                
                                    
TestScheduledStopUnix (108.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-581771 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-581771 --memory=2048 --driver=docker  --container-runtime=crio: (31.986493293s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581771 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-581771 -n scheduled-stop-581771
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581771 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581771 -n scheduled-stop-581771
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-581771
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-581771
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-581771: exit status 7 (68.299376ms)

                                                
                                                
-- stdout --
	scheduled-stop-581771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581771 -n scheduled-stop-581771
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581771 -n scheduled-stop-581771: exit status 7 (67.684893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-581771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-581771
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-581771: (5.092224506s)
--- PASS: TestScheduledStopUnix (108.55s)

                                                
                                    
TestInsufficientStorage (10.58s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-634298 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-634298 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.152342273s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"12d7f19e-5e79-4331-bdbd-0c4e8ad97e47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-634298] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37d444b0-3e6a-4a14-a58e-a3b990d619b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"948ac7ef-334b-4b3d-8a2c-fab6d21907a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c941b0b-03ef-42b1-a49a-96fd87157b23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig"}}
	{"specversion":"1.0","id":"e9f3c27c-8e09-4be2-8431-ce825d48cf2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube"}}
	{"specversion":"1.0","id":"4513fa77-f04d-4f59-8ce8-1824fbde64fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"56a9290b-ba3b-4a49-a2d0-f13cdee40301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ef5063b-ec18-42db-96a5-6875669335a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3998cc56-abb8-45af-8ee2-fd9fd5094473","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"eab73f69-6fb5-47bc-8058-249a32076fcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d28bd2ef-333b-4513-a88a-f8f1898ec077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"770b2910-4b6b-42df-9b49-d708cdb4fa02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-634298\" primary control-plane node in \"insufficient-storage-634298\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"08d53157-e0f2-4917-92c0-d512f8827974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1716228441-18934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c313dd20-5cb9-4971-a631-52ce9c1697da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"935dbc56-24d1-44d2-83d1-18b0f1c9cc27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-634298 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-634298 --output=json --layout=cluster: exit status 7 (274.467278ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-634298","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-634298","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 22:11:45.744030 1486765 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-634298" does not appear in /home/jenkins/minikube-integration/18966-1349783/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-634298 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-634298 --output=json --layout=cluster: exit status 7 (266.240745ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-634298","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-634298","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 22:11:46.011913 1486821 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-634298" does not appear in /home/jenkins/minikube-integration/18966-1349783/kubeconfig
	E0528 22:11:46.022232 1486821 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/insufficient-storage-634298/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-634298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-634298
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-634298: (1.883440441s)
--- PASS: TestInsufficientStorage (10.58s)

                                                
                                    
TestRunningBinaryUpgrade (66.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1566136159 start -p running-upgrade-767078 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1566136159 start -p running-upgrade-767078 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.62056804s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-767078 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-767078 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.182200404s)
helpers_test.go:175: Cleaning up "running-upgrade-767078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-767078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-767078: (3.194606584s)
--- PASS: TestRunningBinaryUpgrade (66.14s)

                                                
                                    
TestKubernetesUpgrade (389.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.831501417s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-510763
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-510763: (1.269469696s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-510763 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-510763 status --format={{.Host}}: exit status 7 (68.156059ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m47.792838686s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-510763 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (76.303374ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-510763] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-510763
	    minikube start -p kubernetes-upgrade-510763 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5107632 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-510763 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0528 22:19:22.540643 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:19:28.137907 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510763 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.626747335s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-510763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-510763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-510763: (2.359285632s)
--- PASS: TestKubernetesUpgrade (389.11s)

                                                
                                    
TestMissingContainerUpgrade (147.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2624973174 start -p missing-upgrade-068225 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2624973174 start -p missing-upgrade-068225 --memory=2200 --driver=docker  --container-runtime=crio: (1m9.682890934s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-068225
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-068225: (13.039061823s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-068225
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-068225 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-068225 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.390535855s)
helpers_test.go:175: Cleaning up "missing-upgrade-068225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-068225
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-068225: (2.03414148s)
--- PASS: TestMissingContainerUpgrade (147.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (77.27208ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-273684] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273684 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273684 --driver=docker  --container-runtime=crio: (40.072644937s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273684 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --driver=docker  --container-runtime=crio: (5.228121453s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273684 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-273684 status -o json: exit status 2 (361.718906ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-273684","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-273684
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-273684: (2.051730963s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.64s)

                                                
                                    
TestNoKubernetes/serial/Start (9.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273684 --no-kubernetes --driver=docker  --container-runtime=crio: (9.48094317s)
--- PASS: TestNoKubernetes/serial/Start (9.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273684 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273684 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.497888ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (8.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (7.834432113s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.34s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-273684
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-273684: (1.211737711s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273684 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273684 --driver=docker  --container-runtime=crio: (7.005435219s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273684 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273684 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.310578ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (70.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.913754518 start -p stopped-upgrade-582746 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0528 22:14:22.541217 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:14:28.137855 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.913754518 start -p stopped-upgrade-582746 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.415653886s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.913754518 -p stopped-upgrade-582746 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.913754518 -p stopped-upgrade-582746 stop: (2.705066046s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-582746 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-582746 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.345338446s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.47s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-582746
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
TestPause/serial/Start (49.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-050955 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-050955 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.134346244s)
--- PASS: TestPause/serial/Start (49.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-050955 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0528 22:17:31.182406 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-050955 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.925895275s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.97s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-050955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-050955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-050955 --output=json --layout=cluster: exit status 2 (307.934628ms)

                                                
                                                
-- stdout --
	{"Name":"pause-050955","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-050955","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-050955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-050955 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
TestPause/serial/DeletePaused (2.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-050955 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-050955 --alsologtostderr -v=5: (2.640695242s)
--- PASS: TestPause/serial/DeletePaused (2.64s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.393888385s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-050955
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-050955: exit status 1 (13.001084ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-050955: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.44s)

                                                
                                    
TestNetworkPlugins/group/false (4.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-982195 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-982195 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (249.765754ms)

                                                
                                                
-- stdout --
	* [false-982195] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 22:18:44.091979 1523470 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:18:44.092217 1523470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:18:44.092247 1523470 out.go:304] Setting ErrFile to fd 2...
	I0528 22:18:44.092264 1523470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:18:44.092575 1523470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1349783/.minikube/bin
	I0528 22:18:44.093043 1523470 out.go:298] Setting JSON to false
	I0528 22:18:44.094067 1523470 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21672,"bootTime":1716913052,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0528 22:18:44.094170 1523470 start.go:139] virtualization:  
	I0528 22:18:44.097631 1523470 out.go:177] * [false-982195] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 22:18:44.100682 1523470 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:18:44.100748 1523470 notify.go:220] Checking for updates...
	I0528 22:18:44.106792 1523470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:18:44.109485 1523470 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1349783/kubeconfig
	I0528 22:18:44.111997 1523470 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1349783/.minikube
	I0528 22:18:44.114379 1523470 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 22:18:44.118495 1523470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:18:44.121441 1523470 config.go:182] Loaded profile config "kubernetes-upgrade-510763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:18:44.121623 1523470 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:18:44.142566 1523470 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 22:18:44.142672 1523470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:18:44.254338 1523470 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:18:44.244627536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:18:44.254444 1523470 docker.go:295] overlay module found
	I0528 22:18:44.257029 1523470 out.go:177] * Using the docker driver based on user configuration
	I0528 22:18:44.259189 1523470 start.go:297] selected driver: docker
	I0528 22:18:44.259206 1523470 start.go:901] validating driver "docker" against <nil>
	I0528 22:18:44.259221 1523470 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:18:44.261655 1523470 out.go:177] 
	W0528 22:18:44.263495 1523470 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0528 22:18:44.265533 1523470 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-982195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 22:14:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-510763
contexts:
- context:
    cluster: kubernetes-upgrade-510763
    user: kubernetes-upgrade-510763
  name: kubernetes-upgrade-510763
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-510763
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.crt
    client-key: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.key
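The repeated "context was not found for specified context: false-982195" errors in this debugLogs section are consistent with the kubectl config above: current-context is empty and the only context defined is kubernetes-upgrade-510763, so any command pinned to --context false-982195 fails before ever reaching a cluster. A minimal sketch of how one could list the available contexts and target the one that does exist (standard kubectl commands, not part of the test run):

	kubectl config get-contexts
	kubectl config use-context kubernetes-upgrade-510763
	kubectl --context kubernetes-upgrade-510763 get pods -A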

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-982195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982195"

                                                
                                                
----------------------- debugLogs end: false-982195 [took: 4.103772924s] --------------------------------
helpers_test.go:175: Cleaning up "false-982195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-982195
--- PASS: TestNetworkPlugins/group/false (4.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (167.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-137556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0528 22:22:25.586100 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-137556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m47.901351147s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-137556 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c5495f98-37ab-4164-839b-05ca543b42a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c5495f98-37ab-4164-839b-05ca543b42a7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00313856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-137556 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-137556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-137556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.464390872s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-137556 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.80s)
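The addon is enabled here with image and registry overrides that redirect metrics-server to registry.k8s.io/echoserver:1.4 behind the fake.domain registry, so the deployment presumably exists while its image cannot actually be pulled. A minimal sketch of commands one might use to confirm the override landed (the k8s-app=metrics-server label selector is assumed from the upstream metrics-server manifests, not taken from this log):

	kubectl --context old-k8s-version-137556 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context old-k8s-version-137556 -n kube-system get pods -l k8s-app=metrics-server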

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-137556 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-137556 --alsologtostderr -v=3: (12.710780306s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-264173 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-264173 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (1m8.768266761s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-137556 -n old-k8s-version-137556
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-137556 -n old-k8s-version-137556: exit status 7 (99.326154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-137556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-264173 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [937c5b81-94c3-4aa4-b4ec-c20dbc357f2a] Pending
helpers_test.go:344: "busybox" [937c5b81-94c3-4aa4-b4ec-c20dbc357f2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [937c5b81-94c3-4aa4-b4ec-c20dbc357f2a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003943297s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-264173 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-264173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-264173 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-264173 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-264173 --alsologtostderr -v=3: (12.041406807s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-264173 -n no-preload-264173
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-264173 -n no-preload-264173: exit status 7 (85.020266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-264173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (296.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-264173 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 22:29:22.540720 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:29:28.139138 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-264173 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (4m56.107578869s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-264173 -n no-preload-264173
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (296.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8fm5h" [a9c425db-80ac-467c-addc-7594f93ac73b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004259798s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8fm5h" [a9c425db-80ac-467c-addc-7594f93ac73b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005145725s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-137556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-lgkhx" [91ab0ad9-b124-4c96-bb9f-ee1f36ccb37a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003749714s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-137556 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-137556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-137556 -n old-k8s-version-137556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-137556 -n old-k8s-version-137556: exit status 2 (312.085669ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-137556 -n old-k8s-version-137556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-137556 -n old-k8s-version-137556: exit status 2 (540.158594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-137556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-137556 -n old-k8s-version-137556
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-137556 -n old-k8s-version-137556
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.41s)
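The exit status 2 results above are expected while the profile is paused: minikube status renders each component through a Go template and exits non-zero when components are paused or stopped, which the test tolerates ("may be ok"). A minimal sketch of the same pause/status/unpause round trip, combining both fields in one template string (the combined format string is illustrative, not taken from the test):

	out/minikube-linux-arm64 pause -p old-k8s-version-137556 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p old-k8s-version-137556 --format='{{.APIServer}}/{{.Kubelet}}'
	out/minikube-linux-arm64 unpause -p old-k8s-version-137556 --alsologtostderr -v=1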

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-lgkhx" [91ab0ad9-b124-4c96-bb9f-ee1f36ccb37a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004916963s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-264173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (63.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-209855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-209855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (1m3.052486284s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-264173 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-264173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-264173 -n no-preload-264173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-264173 -n no-preload-264173: exit status 2 (318.763564ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-264173 -n no-preload-264173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-264173 -n no-preload-264173: exit status 2 (330.166485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-264173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-264173 -n no-preload-264173
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-264173 -n no-preload-264173
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-215422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-215422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (56.883544654s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-209855 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94d24c47-c526-49e6-bef2-d339bd8b59ef] Pending
helpers_test.go:344: "busybox" [94d24c47-c526-49e6-bef2-d339bd8b59ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [94d24c47-c526-49e6-bef2-d339bd8b59ef] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003279655s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-209855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-215422 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [baf79e78-2ff7-415e-9d68-f28a33ea2827] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [baf79e78-2ff7-415e-9d68-f28a33ea2827] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003720702s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-215422 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-209855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-209855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-209855 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-209855 --alsologtostderr -v=3: (11.956768816s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-215422 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-215422 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-215422 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-215422 --alsologtostderr -v=3: (11.968152479s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-209855 -n embed-certs-209855
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-209855 -n embed-certs-209855: exit status 7 (62.773558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-209855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (302.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-209855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-209855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (5m2.21789583s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-209855 -n embed-certs-209855
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422: exit status 7 (108.460965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-215422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-215422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 22:33:10.573994 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.579824 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.590184 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.610530 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.650794 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.731122 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:10.891511 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:11.211989 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:11.852492 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:13.133659 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:15.694394 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:20.814620 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:31.054822 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:33:51.535003 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:34:11.183619 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 22:34:22.541406 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
E0528 22:34:28.137987 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
E0528 22:34:32.497028 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:34:39.329771 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.335087 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.345344 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.365628 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.405892 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.486189 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.646978 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:39.967260 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:40.607883 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:41.888232 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:44.448443 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:49.568637 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:34:59.809402 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:35:20.289829 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
E0528 22:35:54.417868 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:36:01.250911 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-215422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (5m0.49007293s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-llk6q" [24f35962-795f-4c2f-970a-e1fe4ec99d56] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003611241s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5mh57" [d0af82da-47bd-437e-aa3c-e0677419ed4a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004022937s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-llk6q" [24f35962-795f-4c2f-970a-e1fe4ec99d56] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004314101s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-209855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5mh57" [d0af82da-47bd-437e-aa3c-e0677419ed4a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004528769s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-215422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-209855 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
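The image-verification step above only lists the images cached in the profile and flags anything outside the stock Kubernetes set (here the kindnet and busybox images). A minimal sketch of the same check, run by hand against the same profile:
	out/minikube-linux-arm64 -p embed-certs-209855 image list --format=json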

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-209855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-209855 -n embed-certs-209855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-209855 -n embed-certs-209855: exit status 2 (363.503689ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-209855 -n embed-certs-209855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-209855 -n embed-certs-209855: exit status 2 (403.387306ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-209855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-209855 -n embed-certs-209855
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-209855 -n embed-certs-209855
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.46s)
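The pause test boils down to a pause/status/unpause round-trip. A minimal sketch for reproducing it by hand against the same profile (assuming embed-certs-209855 is still running); the exit status 2 results while the cluster is paused are expected, as the test notes above:
	out/minikube-linux-arm64 pause -p embed-certs-209855 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-209855 -n embed-certs-209855   # apiserver reports "Paused"; exit status 2 here is expected
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-209855 -n embed-certs-209855     # kubelet reports "Stopped" while paused
	out/minikube-linux-arm64 unpause -p embed-certs-209855 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-209855 -n embed-certs-209855   # should exit 0 again once unpaused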

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-215422 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-215422 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-215422 --alsologtostderr -v=1: (1.014700461s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422: exit status 2 (487.010349ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422: exit status 2 (438.121167ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-215422 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-215422 --alsologtostderr -v=1: (1.025382597s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-215422 -n default-k8s-diff-port-215422
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (57.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-195575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-195575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (57.523958068s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.52s)
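For reference, the FirstStart invocation above, reflowed onto several lines; --wait is limited to apiserver, system_pods and default_sa because in plain CNI mode no network plugin is installed yet, so ordinary pods cannot schedule (see the WARNING lines in the later newest-cni steps):
	out/minikube-linux-arm64 start -p newest-cni-195575 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.30.1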

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0528 22:37:23.171059 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m1.621859396s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-195575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-195575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.731869423s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.73s)
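The addon is enabled with image and registry overrides so that no real metrics-server image needs to be pulled; the same call, reflowed for readability:
	out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-195575 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain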

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-195575 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-195575 --alsologtostderr -v=3: (1.316416573s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-195575 -n newest-cni-195575
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-195575 -n newest-cni-195575: exit status 7 (88.985191ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-195575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
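Exit status 7 from minikube status is the expected code for a stopped host, and addons can still be enabled in that state. A sketch of the two commands the test runs:
	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-195575 -n newest-cni-195575   # prints "Stopped" and exits with status 7 on a stopped host
	out/minikube-linux-arm64 addons enable dashboard -p newest-cni-195575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4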

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-195575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-195575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.1: (17.410098971s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-195575 -n newest-cni-195575
E0528 22:38:10.574268 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.75s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rqwb2" [36122143-2f36-4340-842b-0fbaa7ac6642] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rqwb2" [36122143-2f36-4340-842b-0fbaa7ac6642] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.007096112s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
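The DNS, Localhost and HairPin checks repeated for every network plugin below all reduce to three execs against the netcat deployment. A sketch using the auto-982195 context from this group (the other groups substitute their own context names):
	kubectl --context auto-982195 exec deployment/netcat -- nslookup kubernetes.default                     # cluster DNS resolution
	kubectl --context auto-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # pod can reach its own port via localhost
	kubectl --context auto-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # hairpin: pod reaches itself through its own service name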

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-195575 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-195575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-195575 -n newest-cni-195575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-195575 -n newest-cni-195575: exit status 2 (291.322971ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-195575 -n newest-cni-195575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-195575 -n newest-cni-195575: exit status 2 (310.635034ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-195575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-195575 -n newest-cni-195575
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-195575 -n newest-cni-195575
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)
E0528 22:43:37.130467 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:43:56.987621 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.122888618s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (78.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0528 22:38:38.258785 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/old-k8s-version-137556/client.crt: no such file or directory
E0528 22:39:05.586590 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.399100145s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ktlpx" [d0f15dc4-1bbb-4ee5-9703-06aa5f7b4350] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004110599s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
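The ControllerPod step waits for the plugin's DaemonSet pod (label app=kindnet) to become Ready. An equivalent manual check, sketched here with kubectl wait rather than the test's own helpers:
	kubectl --context kindnet-982195 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m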

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7l9hx" [72267fbd-0e2e-4088-82b1-3513f9bfb469] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0528 22:39:22.540388 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/functional-486060/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7l9hx" [72267fbd-0e2e-4088-82b1-3513f9bfb469] Running
E0528 22:39:28.137911 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/addons-504712/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003608892s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g2wsx" [5eafc5fa-5a62-4cb5-be79-ccedd30d1910] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005547491s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.474640591s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.47s)
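Unlike the named --cni presets used elsewhere in this report (kindnet, calico, flannel, bridge), this run points --cni at a manifest file, so an arbitrary CNI YAML is applied at start. The invocation, reflowed:
	out/minikube-linux-arm64 start -p custom-flannel-982195 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=crio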

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pc4x7" [b2e761af-ffbb-4681-96c5-41db2e06e072] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pc4x7" [b2e761af-ffbb-4681-96c5-41db2e06e072] Running
E0528 22:40:07.011626 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/no-preload-264173/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004313405s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m28.161408569s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kkm25" [bb2c5580-fcd2-45fa-8d54-ae123795ab78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kkm25" [bb2c5580-fcd2-45fa-8d54-ae123795ab78] Running
E0528 22:41:13.143687 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.148916 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.159111 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.179303 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.219619 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.299922 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.460312 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:13.780921 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
E0528 22:41:14.421609 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003757949s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-982195 exec deployment/netcat -- nslookup kubernetes.default
E0528 22:41:15.702466 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (71.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0528 22:41:54.105903 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/default-k8s-diff-port-215422/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.623994224s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bcqt9" [1c0f2326-75fc-452b-ab91-cf1625f64e20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bcqt9" [1c0f2326-75fc-452b-ab91-cf1625f64e20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.003181358s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-982195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m31.0998105s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r98pt" [c0195d35-826c-492c-82e0-a4f423e89b44] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005135329s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d6rq6" [51571d6a-ffd4-481c-b072-38aa3613848a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0528 22:42:56.168614 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.173864 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.184054 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.204302 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.244647 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.324949 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.485332 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:56.805504 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:57.445821 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:42:58.727364 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
E0528 22:43:01.287891 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-d6rq6" [51571d6a-ffd4-481c-b072-38aa3613848a] Running
E0528 22:43:06.409058 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.00316948s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-982195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-982195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hg4pb" [b096a821-5262-4b63-85bd-38bac527fb0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0528 22:44:11.926719 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:11.932080 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:11.942356 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:11.962803 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:12.004495 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:12.084752 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:12.245090 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:12.565790 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:13.206353 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:14.486576 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-hg4pb" [b096a821-5262-4b63-85bd-38bac527fb0e] Running
E0528 22:44:17.047527 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kindnet-982195/client.crt: no such file or directory
E0528 22:44:18.091359 1355197 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/auto-982195/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00369413s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)
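The NetCatPod step above applies the repository's testdata/netcat-deployment.yaml and then waits for pods labelled app=netcat to become Ready. The commands below are a hedged manual equivalent of that sequence; the manifest's contents are not reproduced in this report, and the 15m timeout simply mirrors the wait logged above.

    # Re-apply the netcat deployment exactly as the test does (context and path taken from the log above)
    kubectl --context bridge-982195 replace --force -f testdata/netcat-deployment.yaml
    # Wait for the app=netcat pod to report Ready, matching the test's 15m0s ceiling
    kubectl --context bridge-982195 wait --for=condition=ready pod -l app=netcat --timeout=15m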

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-982195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)
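The DNS step passes when the in-cluster resolver answers for kubernetes.default from inside the netcat pod. A manual spot-check can go one step further and query the cluster DNS address directly; 10.96.0.10 is the address the debugLogs sections later in this report also probe, so it is assumed here rather than discovered.

    # Resolve the API server's service name via the pod's configured resolver (same check as the test)
    kubectl --context bridge-982195 exec deployment/netcat -- nslookup kubernetes.default
    # Optional: query the cluster DNS service address directly, bypassing /etc/resolv.conf search paths
    kubectl --context bridge-982195 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10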

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
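The HairPin step asks the netcat pod to connect back to itself through its own Service name, which only succeeds when hairpin traffic (a pod reaching its own Service VIP) is forwarded correctly by the CNI. A hedged manual equivalent, assuming the netcat Deployment is exposed by a Service named netcat on port 8080 as the logged command implies:

    # From inside the netcat pod, dial the netcat Service; exit status 0 means the hairpin path works
    kubectl --context bridge-982195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin OK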

                                                
                                    

Test skip (30/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-966905 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-966905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-966905
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-624590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-624590
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-982195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 22:14:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-510763
contexts:
- context:
    cluster: kubernetes-upgrade-510763
    user: kubernetes-upgrade-510763
  name: kubernetes-upgrade-510763
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-510763
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.crt
    client-key: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-982195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982195"

                                                
                                                
----------------------- debugLogs end: kubenet-982195 [took: 4.082281716s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-982195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-982195
--- SKIP: TestNetworkPlugins/group/kubenet (4.27s)
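Every probe in the debugLogs dump above fails with a missing context or profile because the kubenet variant is skipped before any cluster is created, so kubectl falls back to the stale kubernetes-upgrade-510763 entries still present in the kubeconfig. When reading a report like this locally, a quick way to confirm which profiles and contexts actually exist is the following (commands as suggested by the log output itself; results depend on the machine):

    # List minikube profiles known to this workspace
    minikube profile list
    # List kubectl contexts and show which one is current
    kubectl config get-contexts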

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-982195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-982195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-1349783/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 22:14:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-510763
contexts:
- context:
    cluster: kubernetes-upgrade-510763
    user: kubernetes-upgrade-510763
  name: kubernetes-upgrade-510763
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-510763
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.crt
    client-key: /home/jenkins/minikube-integration/18966-1349783/.minikube/profiles/kubernetes-upgrade-510763/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-982195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-982195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982195"

                                                
                                                
----------------------- debugLogs end: cilium-982195 [took: 5.312928863s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-982195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-982195
--- SKIP: TestNetworkPlugins/group/cilium (5.50s)
